Publication
HiPC 2019
Conference paper
Acceleration of Sparse Vector Autoregressive Modeling Using GPUs
Abstract
Autoregressive modeling is a standard approach to mathematically describing the behavior of a time series. The vector autoregressive (VAR) model describes the behavior of multiple time series. VAR modeling is a fundamental approach with applications in multiple domains, such as time series forecasting, Granger causality, system identification, and stochastic control. Solving high-dimensional VAR models requires sparse regression techniques from machine learning, yet even efficient algorithms for solving sparse regression problems are too slow to be practical for large, high-dimensional sparse VAR modeling problems. An earlier application of sparse VAR modeling in the neuroimaging domain required the use of IBM's Blue Gene supercomputers. In this paper, we describe an approach to accelerating large-scale sparse VAR problems, solved using the lasso regression algorithm, on state-of-the-art GPUs. Our accelerated implementation on an NVIDIA GTX 1080 GPU solves the problem in a few seconds, reaching up to 4 TFLOP/s of single-precision performance, which is close to 55% of the GPU's peak matrix-multiply (GEMM) performance.
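For readers unfamiliar with the formulation, the following is a minimal sketch of a VAR(p) model and its lasso-based sparse estimation in standard notation; the symbols (A_k, lambda, the per-series decomposition) are conventional and are not taken from the paper itself.

```latex
% Sketch of a VAR(p) model and its lasso-based sparse estimation
% (standard notation; not taken verbatim from the paper).
\begin{align}
  % VAR(p): each d-dimensional observation y_t is a linear function
  % of the p previous observations plus noise
  \mathbf{y}_t &= \sum_{k=1}^{p} A_k\, \mathbf{y}_{t-k} + \boldsymbol{\varepsilon}_t,
  \qquad \mathbf{y}_t \in \mathbb{R}^d,\ A_k \in \mathbb{R}^{d \times d} \\
  % Sparse estimation: the coefficients predicting series i are fit by
  % lasso regression on the lagged design matrix X (one problem per series)
  \hat{\mathbf{a}}_i &= \arg\min_{\mathbf{a} \in \mathbb{R}^{pd}}
    \tfrac{1}{2}\,\lVert \mathbf{Y}_{\cdot i} - X\mathbf{a} \rVert_2^2
    + \lambda\, \lVert \mathbf{a} \rVert_1
\end{align}
```

Because the d per-series lasso problems are independent, the workload lends itself to batched, dense-matrix computation; the abstract's comparison against peak GEMM performance suggests the GPU implementation exploits this structure, though the exact solver design is described in the paper.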