Publication
DIDL 2017
Conference paper

Balanced system design for distributed deep learning with fast GPUs

Abstract

The field of deep learning has benefited greatly from the availability of accelerators such as Graphics Processing Units (GPUs) and of open-source deep learning frameworks that can exploit them. As GPUs become faster, a key aspect of system design is supplying them with data at a rate that keeps them busy; a balanced system design is therefore important. In this paper, we investigate these issues through experiments on a distributed deep learning system called Phalanx. Phalanx is a data-parallel distributed deep learning system that uses Caffe as the basic learning engine on each node; the nodes run on GPUs and communicate over InfiniBand.
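The abstract describes a data-parallel design: each node trains on its own shard of the data, and the nodes exchange gradients over the interconnect. The sketch below is not Phalanx's actual implementation (which uses Caffe and InfiniBand); it is a minimal single-process illustration of one synchronous data-parallel SGD step, in which per-shard gradients are averaged (as an allreduce over the network would do) before a single shared weight update. The function name, the least-squares loss, and the learning rate are all illustrative assumptions.

```python
import numpy as np

def data_parallel_step(weights, shards, lr=0.1):
    """One synchronous data-parallel SGD step (illustrative sketch).

    Each "worker" computes a gradient on its own data shard; the
    gradients are then averaged (standing in for an allreduce over
    the interconnect) and applied once to the shared weights.
    """
    grads = []
    for X, y in shards:
        # Least-squares loss per shard: grad = X^T (X w - y) / n
        residual = X @ weights - y
        grads.append(X.T @ residual / len(y))
    avg_grad = np.mean(grads, axis=0)  # the "allreduce" averaging step
    return weights - lr * avg_grad

# Example: two workers, each holding half of a synthetic regression set.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(40, 2))
y = X @ w_true
shards = [(X[:20], y[:20]), (X[20:], y[20:])]

w = np.zeros(2)
for _ in range(200):
    w = data_parallel_step(w, shards)
```

Because every worker applies the same averaged gradient, all replicas stay in lockstep; this is also why a fast GPU can starve if the input pipeline or interconnect cannot keep up with the per-step gradient exchange.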

Date

11 Dec 2017
