Publication
JPDC
Paper
Fast neural network training on a cluster of GPUs for action recognition with high accuracy
Abstract
We propose algorithms and techniques to accelerate training of deep neural networks for action recognition on a cluster of GPUs. Our convergence analysis shows that it is possible to reduce communication cost while simultaneously minimizing the number of iterations needed for convergence. We customize the Adam optimizer for our distributed algorithm to improve efficiency. In addition, we employ transfer learning to further reduce training time while improving validation accuracy. For the UCF101 and HMDB51 datasets, the validation accuracies achieved are 93.1% and 67.9%, respectively. With an additional end-to-end trained temporal stream, the validation accuracies achieved for UCF101 and HMDB51 are 93.47% and 81.24%, respectively. To the best of our knowledge, these are the highest accuracies achieved with the two-stream approach using ResNet that does not involve computationally expensive 3D convolutions or pretraining on much larger datasets.
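The abstract does not detail the paper's customized Adam variant or its communication-reduction scheme, so the following is only a minimal, self-contained sketch of the two generic building blocks it references: a standard Adam update step and the per-worker gradient averaging (the all-reduce) used in synchronous data-parallel training. All function names and hyperparameter values here are illustrative, not taken from the paper.

```python
import math

def averaged_gradient(worker_grads):
    """Average per-worker gradients: the all-reduce step in
    synchronous data-parallel training (illustrative stand-in for
    the paper's communication-optimized scheme)."""
    n = len(worker_grads)
    return [sum(gs) / n for gs in zip(*worker_grads)]

def adam_step(theta, grad, m, v, t,
              lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One standard Adam update over a list of scalar parameters.

    t is the 1-based step count used for bias correction; the paper's
    customized variant is not specified in the abstract."""
    new_theta, new_m, new_v = [], [], []
    for p, g, mi, vi in zip(theta, grad, m, v):
        mi = beta1 * mi + (1 - beta1) * g        # first-moment estimate
        vi = beta2 * vi + (1 - beta2) * g * g    # second-moment estimate
        m_hat = mi / (1 - beta1 ** t)            # bias-corrected moments
        v_hat = vi / (1 - beta2 ** t)
        p = p - lr * m_hat / (math.sqrt(v_hat) + eps)
        new_theta.append(p)
        new_m.append(mi)
        new_v.append(vi)
    return new_theta, new_m, new_v

# Demo: two simulated workers minimizing f(x) = x^2 (gradient 2x).
# In real training each worker would compute gradients on its own
# minibatch before the averaging step.
theta, m, v = [5.0], [0.0], [0.0]
for t in range(1, 201):
    worker_grads = [[2.0 * theta[0]], [2.0 * theta[0]]]
    g = averaged_gradient(worker_grads)
    theta, m, v = adam_step(theta, g, m, v, t, lr=0.1)
# theta[0] has moved from 5.0 toward the minimum at 0
```

In a real cluster deployment the averaging would be performed by a collective communication primitive (e.g. an all-reduce across GPUs) rather than a local loop, which is where the communication cost analyzed in the paper arises.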