Publication
SERVICES 2019
Conference paper

MLModelScope: Evaluate and introspect cognitive pipelines

Abstract

The current landscape of cognitive pipelines exercises many Machine Learning (ML) and Deep Learning (DL) building blocks, which leverage non-uniform frameworks, models, and system stacks. Currently, there is no end-to-end tool that facilitates the evaluation and introspection of ML and DL building blocks within cognitive pipelines. In the absence of such tools, the current practice for evaluating and comparing the benefits of hardware or software innovations on end-to-end cognitive pipelines is both arduous and error-prone, stifling the rate of adoption of innovations. We propose MLModelScope: a hardware/software-agnostic platform that facilitates the evaluation and introspection of cognitive pipelines in the cloud or on the edge. We describe the design and implementation of MLModelScope and show how it provides a holistic view of the execution of components within cognitive pipelines. MLModelScope aids application developers in experimenting with and discovering cognitive models, data scientists in comparing and evaluating published algorithms, and system architects in optimizing system stacks for cognitive applications.
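
To make the abstract's notion of pipeline evaluation and introspection concrete, the following is a minimal, hypothetical Python sketch of per-stage tracing for a cognitive pipeline. It does not use MLModelScope's actual API; the stage names, the run_pipeline helper, and the timing scheme are illustrative assumptions only.

import time
from typing import Any, Callable, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable[[Any], Any]]], data: Any):
    """Run named pipeline stages in order, recording wall-clock latency per stage."""
    trace = []
    for name, stage in stages:
        start = time.perf_counter()
        data = stage(data)
        trace.append((name, time.perf_counter() - start))
    return data, trace

# Hypothetical cognitive pipeline: each callable stands in for an ML/DL building
# block that could be backed by a different framework or system stack.
pipeline = [
    ("preprocess", lambda x: [v / 255.0 for v in x]),
    ("model_inference", lambda x: sum(x)),   # placeholder for a DL model call
    ("postprocess", lambda y: {"score": y}),
]

result, trace = run_pipeline(pipeline, [12, 48, 255])
for name, seconds in trace:
    print(f"{name}: {seconds * 1e6:.1f} us")  # per-stage view of where time goes

The point of the sketch is only the stage-level trace: an end-to-end tool such as MLModelScope aims to capture this kind of breakdown across frameworks and hardware without the pipeline author instrumenting each stage by hand.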

Date

01 Jul 2019
