Publication: JPDC

Big vs little core for energy-efficient Hadoop computing

Abstract

Emerging big data applications require a significant amount of server computational power. However, the rapid growth of data makes it challenging to process efficiently using current high-performance server architectures. Furthermore, physical design constraints, such as power and density, have become the dominant limiting factor for scaling out servers. Heterogeneous architectures that combine big Xeon cores with little Atom cores have emerged as a promising solution for enhancing energy efficiency by allowing each application to run on an architecture that matches its resource needs more closely than a one-size-fits-all architecture. The question of whether to map an application to big Xeon or little Atom cores in a heterogeneous server architecture therefore becomes important. In this paper, through a comprehensive system-level analysis, we first characterize Hadoop-based MapReduce applications on big Xeon- and little Atom-based server architectures to understand how the choice of big versus little cores is affected by various parameters at the application, system, and architecture levels, and by the interplay among these parameters. Second, we study how the choice between big and little cores changes across the various phases of MapReduce tasks. Furthermore, we show how the most efficient core for a particular MapReduce phase changes in the presence of accelerators. This characterization helps guide scheduling decisions in future cloud-computing environments equipped with heterogeneous multicore architectures and accelerators. We also evaluate operational and capital costs to understand how performance, power, and area constraints for big data analytics affect the choice of big- versus little-core servers as the more cost- and energy-efficient architecture.