Towards a human-in-the-loop library for tracking hyperparameter tuning in deep learning development
The development lifecycle of Deep Learning (DL) models requires humans (the model trainers) to analyze and steer the training evolution. Trainers analyze intermediate data, fine-tune hyperparameters, and stop training when the resulting model is satisfactory. The problem is that existing DL solutions do not track trainer actions: there is no explicit data relationship linking a trainer's actions, the input data, and the hyperparameters to the output performance results throughout the training process. This jeopardizes online training data analyses as well as post-hoc reproducibility, reusability, and understanding of the results. This paper presents DL-Steer, our first prototype to aid trainers in fine-tuning hyperparameters and to track trainer steering actions. Tracked data are stored in a relational database for online and post-hoc data analyses.