Separating models of learning with faulty teachers

Abstract

We study the power of two models of faulty teachers in Valiant's PAC learning model and Angluin's exact learning model. The first model we consider is learning from an incomplete membership oracle, introduced by Angluin and Slonim [D. Angluin, D.K. Slonim, Randomly fallible teachers: Learning monotone DNF with an incomplete membership oracle, Machine Learning 14 (1) (1994) 7-26]. In this model, the answers to a random subset of the learner's membership queries may be missing. The second model we consider is random persistent classification noise in membership queries, introduced by Goldman, Kearns and Schapire [S. Goldman, M. Kearns, R. Schapire, Exact identification of read-once formulas using fixed points of amplification functions, SIAM Journal on Computing 22 (4) (1993) 705-726]. In this model, the answers to a random subset of the learner's membership queries are flipped. We show that in both the PAC and the exact learning models the incomplete membership oracle is strictly stronger than the noisy membership oracle, under the assumption that PAC learning parities with random classification noise is intractable. We also show that under standard cryptographic assumptions the incomplete membership oracle is strictly weaker than the perfect membership oracle. This generalizes the result of Simon [H. Simon, How many missing answers can be tolerated by query learners?, Theory of Computing Systems 37 (1) (2004) 77-94] and resolves an open question of Bshouty and Eiron [N. Bshouty, N. Eiron, Learning monotone DNF from a teacher that almost does not answer membership queries, Journal of Machine Learning Research 3 (2002) 49-57].
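To make the distinction between the two faulty-teacher models concrete, here is a minimal Python sketch, not taken from the paper: the class names, the parity concept, and the corruption rate p are illustrative assumptions. Both oracles answer persistently, so repeating a membership query on the same point always returns the same (possibly corrupted) answer.

```python
import random


class IncompleteMembershipOracle:
    """Incomplete membership oracle (Angluin-Slonim model): on a random
    subset of points the answer is missing ("I don't know"), and the
    corruption is persistent across repeated queries."""

    def __init__(self, concept, p, seed=0):
        self.concept = concept            # hidden Boolean target function
        self.p = p                        # probability an answer is missing
        self.rng = random.Random(seed)
        self.answers = {}                 # persistence: point -> fixed answer

    def query(self, x):
        if x not in self.answers:
            if self.rng.random() < self.p:
                self.answers[x] = None    # missing answer
            else:
                self.answers[x] = self.concept(x)
        return self.answers[x]


class NoisyMembershipOracle:
    """Membership oracle with random persistent classification noise
    (Goldman-Kearns-Schapire model): on a random subset of points the
    answer is flipped, and the same wrong label is returned every time."""

    def __init__(self, concept, p, seed=0):
        self.concept = concept
        self.p = p                        # probability an answer is flipped
        self.rng = random.Random(seed)
        self.answers = {}

    def query(self, x):
        if x not in self.answers:
            label = self.concept(x)
            if self.rng.random() < self.p:
                label = not label         # flipped answer
            self.answers[x] = label
        return self.answers[x]


# Toy usage: the hidden concept is the parity of the first two bits.
concept = lambda x: bool(x[0] ^ x[1])
incomplete = IncompleteMembershipOracle(concept, p=0.25, seed=1)
noisy = NoisyMembershipOracle(concept, p=0.25, seed=1)

point = (1, 0, 1)
print(incomplete.query(point))   # True, or None if this answer is missing
print(noisy.query(point))        # True, or False if this answer was flipped
print(noisy.query(point) == noisy.query(point))   # always True: the noise is persistent
```

The only difference between the two sketches is how a corrupted point behaves: the incomplete oracle declines to answer, while the noisy oracle commits to the wrong label without signalling that it did, which is the distinction the paper's separation results concern.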

Date

28 Apr 2009

Publication

Theoretical Computer Science
