Interpretable machine learning via convex cardinal shape composition
In consequential applications of supervised classification, where predictions support human decision makers, model interpretability is important for safety. In this paper, we extend cardinal shape composition, a method recently developed in the image processing and computer vision literature for image segmentation, to general machine learning problems. Our transformation yields a computationally tractable ℓ1-regularized hinge-loss optimization over a shape dictionary. With an appropriate choice of atomic shapes in the dictionary used to compose decision boundaries, this approach produces human-interpretable models.
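As a rough illustration of the idea, the sketch below fits a sparse linear classifier over membership features of a hand-chosen dictionary of atomic shapes (here, nested axis-aligned boxes), minimizing an ℓ1-regularized hinge loss by subgradient descent. The box dictionary, the toy data, and the optimizer are assumptions for the example, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data: label +1 inside the box [-1,1]x[-1,1], -1 outside.
X = rng.uniform(-2, 2, size=(400, 2))
y = np.where((np.abs(X[:, 0]) < 1) & (np.abs(X[:, 1]) < 1), 1.0, -1.0)

def box_features(X, boxes):
    """Shape dictionary: 0/1 membership indicator per candidate box."""
    F = [(X[:, 0] >= x0) & (X[:, 0] <= x1) & (X[:, 1] >= y0) & (X[:, 1] <= y1)
         for x0, x1, y0, y1 in boxes]
    return np.array(F, dtype=float).T

# Hypothetical dictionary: four nested square boxes centered at the origin.
boxes = [(-s, s, -s, s) for s in (0.5, 1.0, 1.5, 2.0)]
F = box_features(X, boxes)

# Subgradient descent on (1/n) * sum hinge(y_i (w.f_i + b)) + lam * ||w||_1.
w = np.zeros(F.shape[1])
b, lam, lr = 0.0, 0.01, 0.1
for _ in range(500):
    margins = y * (F @ w + b)
    active = margins < 1  # examples with a nonzero hinge subgradient
    gw = -(y[active, None] * F[active]).sum(0) / len(y) + lam * np.sign(w)
    gb = -y[active].sum() / len(y)
    w -= lr * gw
    b -= lr * gb

acc = np.mean(np.sign(F @ w + b) == y)
print(f"train accuracy: {acc:.2f}")
print("selected shapes (|w| > 0.05):", np.flatnonzero(np.abs(w) > 0.05))
```

Because the features are indicators of named shapes, a sparse weight vector reads directly as a composition of a few boxes, which is what makes the resulting decision boundary human-interpretable.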