Workload Management for Power Efficiency in Heterogeneous Data Centers
Abstract
The cloud computing paradigm has recently emerged as a convenient solution for running diverse workloads on highly parallel and scalable infrastructures. A major appeal of cloud computing is its ability to abstract hardware resources and make them easy to use. Conversely, one of the major challenges for cloud providers is improving the energy efficiency of their infrastructures. To address this challenge, heterogeneous architectures have started to become part of the standard equipment used in data centers. Despite this trend, heterogeneous systems remain difficult to program and manage, and their effectiveness has been proven only in the HPC domain. Cloud workloads are different in nature, and a way to exploit heterogeneity effectively is still lacking. This paper takes a first step towards the effective use of heterogeneous architectures in cloud infrastructures. It presents an in-depth analysis of cloud workloads, highlighting where energy efficiency gains can be obtained. The microservices paradigm is then presented as a way of intelligently partitioning applications so that different components can take advantage of heterogeneous hardware, thereby improving energy efficiency. Finally, the integration of microservices and heterogeneous architectures, together with the challenge of managing legacy applications, is discussed in the context of the OPERA project.