
Lower bounds and hardness amplification for learning shallow monotone formulas

Abstract

Much work has been done on learning various classes of "simple" monotone functions under the uniform distribution. In this paper we give the first unconditional lower bounds for learning problems of this sort by showing that polynomial-time algorithms cannot learn shallow monotone Boolean formulas under the uniform distribution in the well-studied Statistical Query (SQ) model. We introduce a new approach to understanding the learnability of "simple" monotone functions that is based on a recent characterization of Strong SQ learnability by Simon (2007). Using the characterization, we first show that depth-3 monotone formulas of size n^{o(1)} cannot be learned by any polynomial-time SQ algorithm to accuracy 1 - 1/(log n)^{Ω(1)}. We then build on this result to show that depth-4 monotone formulas of size n^{o(1)} cannot be learned even to a certain 1/2 + o(1) accuracy in polynomial time. This improved hardness is achieved using a general technique that we introduce for amplifying the hardness of "mildly hard" learning problems in either the PAC or SQ framework. This hardness amplification for learning builds on the ideas in the work of O'Donnell (2004) on hardness amplification for approximating functions using small circuits, and is applicable to a number of other contexts. Finally, we demonstrate that our approach can also be used to reduce the well-known open problem of learning juntas to learning depth-3 monotone formulas. © 2011 V. Feldman, H.K. Lee & R.A. Servedio.

Date

2011

Publication

JMLR

Authors

Vitaly Feldman, Homin K. Lee, Rocco A. Servedio

Topics
