Deep learning has demonstrated an ability to substantially improve the state of the art in understanding the visual content of images. Much of the recent progress has been measured in the context of the ImageNet large-scale visual recognition challenge, which uses a modest subset of 1.2 million images labeled with 1,000 concepts from the full ImageNet dataset. Few published results have applied learning to the full ImageNet dataset of 14 million images spanning nearly 22,000 concepts. This is partly due to the substantial time and computational resources needed to perform adequate training on such a large dataset, even when graphics processing units are exploited. To achieve this scale of training, we use Phalanx, a distributed deep learning framework being developed by IBM. Phalanx is a distributed framework with a parameter server as the hub and multiple learners, each employing the open source Caffe platform, as spokes. Using Phalanx on the full ImageNet dataset, we performed experiments that demonstrate the impact of large-scale learning in multiple training scenarios. These include fine-tuning, in which a pretrained model is used as the basis for further training, as well as the use of pretrained models to learn deep feature embeddings.
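The hub-and-spoke arrangement described above can be illustrated with a minimal sketch of a synchronous parameter-server loop. This is a hypothetical toy example, not Phalanx's actual implementation: a central server holds the model weights, each learner computes a gradient on its own data shard, and the server averages the gradients and applies an SGD step. For clarity the "model" is a one-parameter linear regressor and all names (`ParameterServer`, `learner_gradient`) are illustrative.

```python
def learner_gradient(w, shard):
    # Gradient of mean squared error for the 1-D linear model y = w * x,
    # computed over this learner's local data shard.
    g = 0.0
    for x, y in shard:
        g += 2.0 * (w * x - y) * x
    return g / len(shard)

class ParameterServer:
    # The "hub": holds the weights and applies averaged updates.
    def __init__(self, w0, lr):
        self.w = w0
        self.lr = lr

    def step(self, grads):
        # Average gradients from all learners, then take one SGD step.
        self.w -= self.lr * sum(grads) / len(grads)

# Synthetic data generated from y = 3 * x, split across two learners
# (the "spokes"); real learners would each train a Caffe model instead.
shards = [[(x, 3.0 * x) for x in (1.0, 2.0)],
          [(x, 3.0 * x) for x in (3.0, 4.0)]]

server = ParameterServer(w0=0.0, lr=0.02)
for _ in range(200):
    grads = [learner_gradient(server.w, s) for s in shards]
    server.step(grads)

print(round(server.w, 3))  # converges toward the true weight 3.0
```

In a real deployment the learners run in parallel on separate GPUs or machines and exchange gradients and weights with the server over the network; the synchronous averaging shown here is only one of several possible update policies.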