Supervised deep learning depends on labeled datasets that define an objective categorization of the subject matter, but annotation is typically expensive in specialized domains. The ISIC 2017 skin lesion and BCCD blood cell image datasets are used to represent complex medical annotation scenarios in which no domain knowledge enters either preprocessing or feature extraction. A low-complexity supervision method is proposed, based on an iterative machine-learning algorithm that fulfills the requirements of cognitive-assisted labeling. The feature space must be visualizable and editable so that new label information can be integrated as a feedback mechanism that improves embedding quality. Efficient labeling is achieved by leveraging the annotator's ability to quickly assess local homogeneity, which enables compound labeling of entire groups of samples. Improved unsupervised feature extraction is hypothesized to reduce the labeling burden, so the best-performing feature extractors are located empirically at various depths of ImageNet-pretrained convolutional neural networks, including VGG-16, Inception-v4, and Inception-ResNet-v2. Annotator emulation is performed to estimate upper bounds on achievable labeling efficiency and to explore active-learning dynamics. A two-fold increase in efficiency is shown for partial labeling, despite the complexity of the skin lesion data and the only marginal improvement obtained with pretrained features.
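The annotator-emulation idea above can be sketched as follows. This is a minimal illustrative toy, not the paper's actual pipeline: the embedding, the clustering granularity `k`, and the "one action labels a whole cluster by its majority class" rule are all assumptions standing in for the compound-labeling upper bound described in the abstract.

```python
# Toy annotator emulation for compound labeling: the emulated annotator
# verifies the local homogeneity of a cluster at a glance and labels the
# whole cluster in one action. All names and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy "embedding": two well-separated Gaussian blobs standing in for the
# feature space of two classes (e.g., lesion vs. non-lesion).
n_per_class = 100
X = np.vstack([
    rng.normal(loc=(-3.0, 0.0), scale=0.5, size=(n_per_class, 2)),
    rng.normal(loc=(+3.0, 0.0), scale=0.5, size=(n_per_class, 2)),
])
y_true = np.repeat([0, 1], n_per_class)

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means; returns a cluster index per sample."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return assign

k = 4  # granularity of compound labels (hypothetical choice)
assign = kmeans(X, k)

# Emulated annotator: one labeling "action" assigns an entire cluster its
# majority ground-truth class -- an upper bound on compound-labeling gain.
y_pred = np.empty_like(y_true)
for j in range(k):
    members = assign == j
    if not members.any():
        continue
    y_pred[members] = np.bincount(y_true[members]).argmax()

actions = k                    # one action per cluster
accuracy = (y_pred == y_true).mean()
efficiency = len(X) / actions  # labels obtained per annotation action
```

With `k` clusters over `N` samples, the efficiency upper bound is simply `N / k` labels per action, traded off against the label noise introduced wherever a cluster is not locally homogeneous.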