Publication
VHPC 2021
Workshop paper

Datashim and its applications in Bioinformatics

Abstract

Bioinformatics pipelines depend on shared POSIX filesystems for their input, output, and intermediate data storage. Containerization makes it more difficult for workloads to access these shared filesystems. In our previous study, we were able to run both ML and non-ML pipelines on Kubeflow successfully. However, the storage solutions were complex and suboptimal. In this article, we introduce the concept of a Dataset and its corresponding resource as a native Kubernetes object. We have implemented this concept in a new framework, Datashim, which takes care of the low-level details of data access in Kubernetes pods. Its pluggable architecture is designed for the development of caching, scheduling, and governance plugins, which together manage the entire lifecycle of the Dataset custom resource. We use Datashim to serve data from object stores to both ML and non-ML pipelines on Kubeflow. We feed training data into ML models directly with Datashim instead of downloading it to local disks, which makes the input scalable. We have enhanced the durability of training metadata by storing it in a dataset, which also simplifies the setup of TensorBoard, making it independent of the notebook server. For the non-ML case, we have simplified the 1000 Genomes Project pipeline with datasets injected dynamically. We have thus established a new resource type, Dataset, to represent the concept of a data source on Kubernetes, with our novel framework Datashim managing its lifecycle.
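
As an illustration (not taken from the paper itself), the sketch below shows what declaring a Dataset for an S3-compatible object store and consuming it from a pod might look like. It assumes the manifest conventions of the open-source Datashim project; the resource names, endpoint, bucket, and credential values are placeholders.

```yaml
# Hypothetical Dataset manifest for an S3-compatible bucket
# (placeholder values; field names follow the open-source Datashim project).
apiVersion: com.ie.ibm.hpsys/v1alpha1
kind: Dataset
metadata:
  name: genomes-input
spec:
  local:
    type: "COS"                       # S3-compatible object storage
    endpoint: "https://s3.example.com"
    bucket: "1000-genomes"
    accessKeyID: "ACCESS_KEY"
    secretAccessKey: "SECRET_KEY"
---
# A pod requests the dataset via labels; Datashim injects the mount
# so the container sees the bucket under /mnt/datasets/genomes-input.
apiVersion: v1
kind: Pod
metadata:
  name: pipeline-step
  labels:
    dataset.0.id: "genomes-input"
    dataset.0.useas: "mount"
spec:
  containers:
    - name: step
      image: alpine:3.14
      command: ["ls", "/mnt/datasets/genomes-input"]
```

In this sketch the pipeline step never handles storage credentials or volume definitions itself: it only names the dataset it needs, and the framework resolves the mount at scheduling time.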