Publication
JPDC
Paper
A Hitchhiker's Guide On Distributed Training Of Deep Neural Networks
Abstract
Deep learning has led to tremendous advancements in the field of Artificial Intelligence. One caveat, however, is the substantial amount of compute needed to train these deep learning models. Training on a benchmark dataset like ImageNet with a single machine and a modern GPU can take up to a week, and distributing training across multiple machines has been shown to reduce this time drastically. Recent work has brought ImageNet training time down to as little as 4 minutes using a cluster of 2048 GPUs. This paper surveys the algorithms and techniques used in distributed training and presents the current state of the art for a modern distributed training framework. More specifically, we explore the synchronous and asynchronous variants of distributed Stochastic Gradient Descent, various all-reduce gradient aggregation strategies, and best practices for achieving higher throughput and lower latency on a cluster, such as mixed-precision training, large-batch training, and gradient compression.
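To make the synchronous data-parallel setting concrete, the sketch below (not taken from the paper) shows one synchronous SGD step in which each worker computes gradients on its local mini-batch and an all-reduce averages them before the update, so every replica applies the same step. It assumes `torch.distributed` has already been initialized (e.g. via `init_process_group`) and that `model`, `loss_fn`, and a local mini-batch (`inputs`, `targets`) are defined; the function name and learning rate are illustrative.

```python
# Minimal sketch of one synchronous data-parallel SGD step with all-reduce
# gradient averaging. Assumes torch.distributed is already initialized and
# that model, loss_fn, inputs, targets exist (hypothetical names).
import torch
import torch.distributed as dist

def synchronous_sgd_step(model, loss_fn, inputs, targets, lr=0.1):
    # Forward and backward pass on this worker's shard of the global batch.
    loss = loss_fn(model(inputs), targets)
    model.zero_grad()
    loss.backward()

    world_size = dist.get_world_size()
    with torch.no_grad():
        for param in model.parameters():
            if param.grad is None:
                continue
            # All-reduce sums the gradients across workers; divide by the
            # number of workers to obtain the globally averaged gradient.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad.div_(world_size)
            # Plain SGD update using the averaged gradient.
            param.add_(param.grad, alpha=-lr)
    return loss.item()
```

In practice, frameworks such as PyTorch's DistributedDataParallel perform this gradient all-reduce automatically and overlap it with the backward pass, but the explicit version above illustrates the aggregation step the survey discusses.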