Publication
CNSM 2015
Conference paper

On selective compression of primary data

Abstract

With the advent of social media, the Internet of Things (IoT), widespread use of richer media formats such as video, and the growing use of mobile devices, the volume of online data has increased rapidly in recent years. To cope with this data explosion, businesses and cloud providers are turning to space-reduction techniques such as compression and deduplication to lower the cost of storing data without sacrificing the quality of their service. Capacity savings, however, come at the cost of performance and additional resource overhead. One drawback of compression is the extra computation required to store and fetch data, which can significantly increase response time, i.e., I/O latency. Worse yet, compressing data that does not compress well, e.g., data that is encrypted or already compressed, incurs the full latency penalty for only marginal capacity savings. From a data center administrator's perspective, therefore, we should pick the set of volumes that yields the most space savings with the least latency impact for a given amount of computation capacity, without exhaustively inspecting the contents of each volume. To fill this void, this paper proposes an approach to managing compression for a very large set of volumes that maximizes capacity savings and minimizes latency impact without scanning the actual data content (thereby avoiding security concerns). Our pilot deployments show significant capacity savings and performance improvements compared to benchmark compression strategies.
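
To make the volume-selection problem concrete, the sketch below frames it as choosing which volumes to compress under a fixed compute budget, ranking candidates by estimated savings per unit of cost. This is purely an illustrative sketch and not the paper's algorithm; the volume names, per-volume estimates, and the greedy savings-per-cost heuristic are all assumptions made for the example.

# Illustrative sketch only (not the paper's method): given hypothetical per-volume
# estimates of capacity savings, added latency, and CPU cost, pick a subset of
# volumes to compress under a compute budget using a greedy savings-per-cost ranking.
from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    est_savings_gb: float   # estimated capacity saved if compressed (assumed input)
    est_latency_ms: float   # estimated added I/O latency if compressed (assumed input)
    cpu_cost: float         # estimated compute needed to keep the volume compressed

def pick_volumes(volumes, cpu_budget):
    """Greedily favor the highest estimated savings per unit of combined cost.

    The combined cost (cpu_cost + est_latency_ms) is a crude stand-in for the
    latency/compute trade-off described in the abstract.
    """
    ranked = sorted(
        volumes,
        key=lambda v: v.est_savings_gb / (v.cpu_cost + v.est_latency_ms),
        reverse=True,
    )
    chosen, used = [], 0.0
    for v in ranked:
        if used + v.cpu_cost <= cpu_budget:
            chosen.append(v)
            used += v.cpu_cost
    return chosen

if __name__ == "__main__":
    # Hypothetical volumes: text-heavy data compresses well, media archives do not.
    candidates = [
        Volume("db_logs", est_savings_gb=120.0, est_latency_ms=0.4, cpu_cost=3.0),
        Volume("video_archive", est_savings_gb=5.0, est_latency_ms=0.6, cpu_cost=4.0),
        Volume("home_dirs", est_savings_gb=80.0, est_latency_ms=0.3, cpu_cost=2.0),
    ]
    for v in pick_volumes(candidates, cpu_budget=5.0):
        print("compress:", v.name)

With a budget of 5.0 in this toy example, the sketch selects db_logs and home_dirs and skips the already-compressed video_archive, mirroring the abstract's point that compressing incompressible data pays the latency penalty for little gain.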

Date

28 Dec 2015

