Data management in the MIRABEL smart grid system
Matthias Boehm, Lars Dannecker, et al.
EDBT/ICDT-WS 2012
Large-scale machine learning algorithms are often iterative, using repeated read-only data access and I/O-bound matrix-vector multiplications to converge to an optimal model. It is crucial for performance to fit the data into single-node or distributed main memory and enable fast matrix-vector operations on in-memory data. General-purpose, heavy- and lightweight compression techniques struggle to achieve both good compression ratios and fast decompression speed to enable block-wise uncompressed operations. Therefore, we initiate work, inspired by database compression and sparse matrix formats, on value-based compressed linear algebra (CLA), in which heterogeneous, lightweight database compression techniques are applied to matrices, and then linear algebra operations such as matrix-vector multiplication are executed directly on the compressed representation. We contribute effective column compression schemes, cache-conscious operations, and an efficient sampling-based compression algorithm. Our experiments show that CLA achieves in-memory operations performance close to the uncompressed case and good compression ratios, which enables fitting substantially larger datasets into available memory. We thereby obtain significant end-to-end performance improvements of up to 9.2x.
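As a rough illustration of the CLA idea described in the abstract (compress each matrix column by its distinct values, then execute matrix-vector multiplication directly on the compressed groups), the following minimal Python sketch uses an assumed toy value-to-row-offset layout per column. It is for intuition only and is not SystemML's actual CLA encodings or code.

# Minimal sketch (assumed toy layout): compress columns by distinct values,
# then run a matrix-vector multiply directly on the compressed groups.
# Not SystemML's CLA implementation.

def compress_columns(M):
    # For each column, map each distinct nonzero value to its row offsets.
    n_rows, n_cols = len(M), len(M[0])
    cols = []
    for j in range(n_cols):
        groups = {}
        for i in range(n_rows):
            v = M[i][j]
            if v != 0.0:
                groups.setdefault(v, []).append(i)
        cols.append(groups)
    return n_rows, cols

def matvec_compressed(n_rows, cols, x):
    # y = M @ x computed without decompressing: one multiply per value group,
    # then a scatter of that single product into the affected result rows.
    y = [0.0] * n_rows
    for j, groups in enumerate(cols):
        xj = x[j]
        if xj == 0.0:
            continue
        for v, offsets in groups.items():
            contrib = v * xj
            for i in offsets:
                y[i] += contrib
    return y

# Example: a small matrix with few distinct values per column.
M = [[1.0, 0.0, 3.0],
     [1.0, 2.0, 0.0],
     [1.0, 2.0, 3.0]]
x = [1.0, 2.0, 3.0]
n_rows, cols = compress_columns(M)
print(matvec_compressed(n_rows, cols, x))   # -> [10.0, 5.0, 14.0]

The point of the sketch is that a column with few distinct values needs only one multiplication per value group rather than one per nonzero cell, which is why operating directly on the compressed representation can approach uncompressed performance while fitting larger datasets in memory.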
Matthias Boehm, Berthold Reinwald, et al.
VLDB
Paul G. Brown, Peter J. Haas
VLDB 2003
Wang-Chiew Tan, Peter J. Haas, et al.
IHI 2012