Publication
MLSP 2016
Conference paper

Learning sparse two-level Boolean rules

Abstract

This paper develops a novel optimization framework for learning accurate and sparse two-level Boolean rules for classification, both in Conjunctive Normal Form (CNF, i.e., AND-of-ORs) and in Disjunctive Normal Form (DNF, i.e., OR-of-ANDs). In contrast to opaque models (e.g., neural networks), sparse two-level Boolean rules offer the crucial benefit of interpretability, which is necessary in a wide range of applications such as law and medicine and is attracting considerable attention in machine learning. This paper introduces two principled objective functions to trade off classification accuracy and sparsity, where 0-1 error and Hamming loss are used to characterize accuracy. We propose efficient procedures to optimize these objectives based on linear programming (LP) relaxation, block coordinate descent, and alternating minimization. We also describe a new approach to rounding any fractional values in the optimal solutions of LP relaxations. Experiments show that our new algorithms based on the Hamming loss objective provide excellent tradeoffs between accuracy and sparsity, with improvements over state-of-the-art methods.
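To make the LP-relaxation idea concrete, the following is a minimal sketch, not the authors' implementation, of learning one sparse conjunctive (AND) clause: per-sample slack variables play the role of Hamming-style errors, a penalty on the clause variables encourages sparsity, and fractional LP values are rounded by naive 0.5 thresholding in place of the paper's rounding procedure. The function name learn_and_clause, the toy data, and the penalty weight lam are assumptions made for illustration; the full method additionally handles two-level (OR-of-ANDs) rules via block coordinate descent and alternating minimization.

# Illustrative sketch only (not the paper's method): one sparse AND clause
# learned via LP relaxation plus naive threshold rounding.
import numpy as np
from scipy.optimize import linprog

def learn_and_clause(X, y, lam=0.1):
    """Relax w in {0,1}^d to [0,1]^d; a sample is predicted positive iff it
    has every selected feature. Variables: [w_1..w_d, xi_1..xi_n]."""
    n, d = X.shape
    # Objective: sparsity penalty on w plus per-sample error slacks xi.
    c = np.concatenate([lam * np.ones(d), np.ones(n)])
    A_ub, b_ub = [], []
    for i in range(n):
        if y[i] == 1:
            # Positive sample: xi_i >= w_j for every feature j it lacks,
            # since the clause misses it if any selected feature is absent.
            for j in np.flatnonzero(X[i] == 0):
                row = np.zeros(d + n)
                row[j] = 1.0          # w_j
                row[d + i] = -1.0     # -xi_i
                A_ub.append(row); b_ub.append(0.0)
        else:
            # Negative sample: xi_i >= 1 - sum_j w_j * (1 - x_ij),
            # i.e. a false positive costs 1 unless some selected feature is absent.
            row = np.zeros(d + n)
            row[:d] = -(1.0 - X[i])
            row[d + i] = -1.0
            A_ub.append(row); b_ub.append(-1.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * (d + n), method="highs")
    w_frac = res.x[:d]
    # Naive rounding; the paper describes a more careful rounding approach.
    return (w_frac > 0.5).astype(int)

if __name__ == "__main__":
    # Toy data: label is 1 iff features 0 AND 1 are both present.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(200, 6))
    y = (X[:, 0] & X[:, 1]).astype(int)
    print("selected features:", np.flatnonzero(learn_and_clause(X, y)))

On this noiseless toy data the recovered clause should be features 0 and 1; with noisy data, how fractional solutions are rounded matters, which is why the paper treats rounding as a separate design question.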

Date

08 Nov 2016
