Publication
DIDL 2017
Conference paper
Balanced system design for distributed deep learning with fast GPUs
Abstract
The field of Deep Learning has benefited greatly from the availability of accelerators such as Graphics Processing Units (GPUs) and of open-source deep learning frameworks that can exploit these GPUs. As GPUs become faster, a key aspect of system design is the ability to supply them with data at a rate that keeps them busy; a balanced system design is therefore important. In this paper, we investigate these issues with experiments on a distributed deep learning system called Phalanx. Phalanx is a data-parallel distributed deep learning system that uses Caffe as the basic learning engine on each node. The nodes train on GPUs and communicate over InfiniBand.
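To illustrate the balance problem the abstract describes, the sketch below overlaps data loading with compute using a bounded prefetch queue, so a fast GPU is not starved waiting on I/O. This is a minimal, hypothetical illustration and not the Phalanx implementation; the functions load_batch and train_step are placeholder stand-ins for the input pipeline and the per-node Caffe training step.

```python
# Minimal sketch of a balanced input pipeline: overlap data loading with
# compute so the accelerator is never idle waiting for input. This is an
# illustrative example, NOT the Phalanx implementation; load_batch() and
# train_step() are hypothetical placeholders.
import queue
import threading
import time

BATCHES = 8
PREFETCH_DEPTH = 2  # bounded queue: loader runs ahead without unbounded memory

def load_batch(i):
    """Hypothetical I/O stage: read and decode one mini-batch (simulated)."""
    time.sleep(0.05)  # stands in for disk/network latency
    return f"batch-{i}"

def train_step(batch):
    """Hypothetical compute stage: one forward/backward pass (simulated)."""
    time.sleep(0.02)  # stands in for GPU compute time

def loader(q):
    for i in range(BATCHES):
        q.put(load_batch(i))  # blocks if the consumer falls behind
    q.put(None)  # sentinel: no more data

q = queue.Queue(maxsize=PREFETCH_DEPTH)
threading.Thread(target=loader, args=(q,), daemon=True).start()

while (batch := q.get()) is not None:
    train_step(batch)  # compute proceeds while the loader prepares the next batch
```

In this toy setup the simulated load time exceeds the simulated step time, so the loader becomes the bottleneck and the compute stage stalls between batches; this is exactly the kind of imbalance a faster GPU exposes, and why supplying data at a matching rate is central to the system design the paper studies.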