With increasing pressure from policy makers and regulators on sustainability and climate change mitigation, enterprises are being mandated to lower the carbon emissions of their operations. For cloud operators, this necessitates reducing the energy consumed in their data centers. One predominant way of achieving this is to raise average server utilization, which is typically kept low to absorb occasional load peaks and to avoid interference among co-located workloads. To this end, we propose novel energy-efficiency and compute-performance models of servers based on metrics normalized across workloads and servers, accounting for both CPU and memory usage. These models can be used to formulate optimization problems for energy efficiency and performance, and to design a higher-level framework that determines the right amount of extra provisioning needed to ensure a desired level of performance, thereby reducing under-utilization when multiple workloads share a common platform.