Meeting Latency Targets under Transient Bursts: A Case Study on Spark Streaming
Real-time processing of big data has become a core operation in many areas of business, such as extracting value from live social network data. Big data workloads in the wild exhibit strong temporal variability, which not only risks slow responsiveness in data analysis but also raises the risk of service outages. Recently developed batch streaming systems based on the MapReduce framework have proven effective on non-overloaded systems. However, little is known about how to sustain the performance of batch streaming systems under bursty workloads. In this paper, we propose a latency-driven data controller, Dslash, which aims to process as much data as possible while processing it as fast as the application's target latency and the system's capacity allow. In particular, we implement Dslash on Spark Streaming, an emerging and complex batch streaming system. Dslash's features include (i) placing data in an augmented distributed memory, (ii) shedding out-of-date data, (iii) improving the processing locality of Map tasks, and (iv) delaying data processing during transient overloads. Extensive evaluations on a large number of workloads show that Dslash ensures stable and fast responsiveness compared to vanilla Spark Streaming.
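The shedding and delaying ideas in features (ii) and (iv) can be illustrated with a minimal latency-driven admission sketch: queued batches are processed only while they can still meet the target latency, and stale batches are shed. This is an illustrative model, not Dslash's actual implementation or Spark Streaming's API; all class and parameter names below are hypothetical.

```python
from collections import deque


class LatencyDrivenShedder:
    """Illustrative controller (not Dslash's real code): admits the oldest
    batch that can still meet the target latency and sheds out-of-date ones."""

    def __init__(self, target_latency_s):
        self.target_latency_s = target_latency_s
        self.queue = deque()   # holds (arrival_time, batch) pairs
        self.shed_count = 0    # number of batches dropped as out-of-date

    def enqueue(self, batch, now):
        """Buffer an arriving batch; under overload it waits here (feature iv)."""
        self.queue.append((now, batch))

    def next_batch(self, now):
        """Return the oldest batch still within the latency target,
        shedding any batch that has aged past it (feature ii)."""
        while self.queue:
            arrival, batch = self.queue.popleft()
            if now - arrival <= self.target_latency_s:
                return batch
            self.shed_count += 1   # too old to meet the target: shed it
        return None
```

For example, with a 2-second target, a batch queued 5 seconds ago is shed, while one queued half a second ago is admitted; the trade-off is sacrificing stale results to keep responsiveness stable during a burst.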