Progress-based Container Scheduling for Short-lived Applications in a Kubernetes Cluster
In the past decade, we have witnessed enormous growth in the data generated by different sources, ranging from weather sensors and customer purchasing records to Internet of Things devices. Emerging data-driven technologies, such as Amazon Personalize, which creates real-time individualized recommendations for customers based on multidimensional data analytics, have been reshaping our daily lives for years. It is, however, a challenging task to fully utilize and harness the potential of data, especially big data, due to its Volume, Velocity, Variety, Variability, and Value (5Vs). Most businesses thus choose to migrate their hardware demands to cloud providers, such as Amazon Web Services, which is powered by hundreds of thousands of servers. A cluster built from a number of cloud servers is the basic management unit for providing shared computing resources. The typical structure of a cluster consists of managers and workers. When a job arrives at the cluster, as the first step, the managers have to select a worker to host the incoming job. Traditionally, this selection is based on the state of the workers, e.g., resource availability, and the specifications of jobs, e.g., labels, zones, and regions. Taking currently running jobs into account, we propose a progress-based container placement scheme named ProCon. When scheduling incoming containers, ProCon considers not only the instant resource utilization on the workers but also an estimate of future resource usage. By monitoring the progress of running jobs, ProCon balances resource contention across the cluster and reduces both completion time and makespan. Specifically, extensive experiments show that ProCon reduces completion time by up to 53.3% for a particular job and improves overall performance by 23.0%. Additionally, ProCon improves makespan by up to 37.4% compared to the default scheduler available in Kubernetes.
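To make the core idea concrete, the following is a minimal sketch of progress-based node scoring: a node's expected free capacity counts jobs that are close to completion as resources about to be released. All names and the scoring formula here are illustrative assumptions for exposition, not the paper's actual ProCon algorithm.

```python
# Illustrative sketch of progress-based placement (hypothetical names
# and formula; not the actual ProCon implementation).
from dataclasses import dataclass, field

@dataclass
class Job:
    cpu: float       # CPU share the job currently holds
    progress: float  # fraction of work completed, in [0, 1]

@dataclass
class Node:
    capacity: float
    jobs: list = field(default_factory=list)

def expected_free_cpu(node: Node) -> float:
    """Estimate near-future free CPU: jobs near completion are
    expected to release their resources soon, so only the
    remaining (1 - progress) share counts as committed."""
    committed = sum(j.cpu * (1.0 - j.progress) for j in node.jobs)
    return node.capacity - committed

def pick_node(nodes: list) -> Node:
    # Place the incoming container on the node with the most
    # *expected* free CPU, not merely the most currently idle CPU.
    return max(nodes, key=expected_free_cpu)

busy = Node(capacity=4.0, jobs=[Job(cpu=3.0, progress=0.9)])  # almost done
idle = Node(capacity=4.0, jobs=[Job(cpu=2.0, progress=0.1)])  # just started
print(pick_node([busy, idle]) is busy)  # → True: busy node frees 3 CPUs soon
```

A purely utilization-based scheduler would prefer the second node (more CPU idle right now); the progress-aware score instead prefers the first node, whose large job is about to finish, which is the intuition behind ProCon's reduction in completion time and makespan.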