A Hitchhiker's Guide On Distributed Training Of Deep Neural Networks

Abstract

Deep learning has led to tremendous advances in the field of Artificial Intelligence. One caveat, however, is the substantial amount of compute needed to train these deep learning models. Training a model on a benchmark dataset like ImageNet on a single machine with a modern GPU can take up to a week, and distributing training across multiple machines has been observed to bring this time down drastically. Recent work has reduced ImageNet training time to as little as 4 minutes by using a cluster of 2048 GPUs. This paper surveys the algorithms and techniques used in distributed training and presents the current state of the art for a modern distributed training framework. More specifically, we explore the synchronous and asynchronous variants of distributed Stochastic Gradient Descent, various All Reduce gradient aggregation strategies, and best practices for obtaining higher throughput and lower latency over a cluster, such as mixed precision training, large batch training, and gradient compression.
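
To make the synchronous variant concrete, the following is a minimal sketch (not taken from the paper) of synchronous distributed SGD with all-reduce gradient aggregation. It assumes PyTorch's torch.distributed package and a launcher such as torchrun; the model, data, and hyperparameters are placeholders.

    import torch
    import torch.distributed as dist

    def main():
        # One process per worker, launched e.g. with `torchrun --nproc_per_node=N train.py`.
        dist.init_process_group(backend="gloo")  # use "nccl" on GPU clusters
        torch.manual_seed(0)

        model = torch.nn.Linear(10, 1)                    # placeholder model
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
        loss_fn = torch.nn.MSELoss()

        for _ in range(10):
            inputs = torch.randn(32, 10)                  # placeholder local mini-batch
            targets = torch.randn(32, 1)
            optimizer.zero_grad()
            loss_fn(model(inputs), targets).backward()

            # Synchronous data parallelism: all-reduce sums the gradients across
            # all workers; dividing by the world size gives the average, so every
            # replica applies an identical update before the next iteration.
            for p in model.parameters():
                dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
                p.grad /= dist.get_world_size()
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

In practice this per-parameter loop is typically replaced by bucketed all-reduce overlapped with the backward pass (as in PyTorch's DistributedDataParallel), which is one of the throughput optimizations the survey discusses.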

Date

01 Mar 2020

Publication

JPDC

Authors