The relative value of labeled and unlabeled samples in pattern recognition with an unknown mixing parameter
Abstract
We observe a training set $Q$ composed of $l$ labeled samples $\{(X_1, \theta_1), \ldots, (X_l, \theta_l)\}$ and $u$ unlabeled samples $\{X'_1, \ldots, X'_u\}$. The labels $\theta_i$ are independent random variables satisfying $\Pr\{\theta_i = 1\} = \eta$ and $\Pr\{\theta_i = 2\} = 1 - \eta$. The labeled observations $X_i$ are independent and, given $\theta_i$, $X_i$ has conditional density $f_{\theta_i}(\cdot)$. Let $(X_0, \theta_0)$ be a new sample, independent of the training set and distributed in the same manner. We observe $X_0$ and wish to infer the classification $\theta_0$. In this paper we first assume that the densities $f_1(\cdot)$ and $f_2(\cdot)$ are given and that the mixing parameter $\eta$ is unknown. We show that the relative value of labeled and unlabeled samples in reducing the risk of optimal classifiers is the ratio of the Fisher informations they carry about the parameter $\eta$. We then assume that two densities $g_1(\cdot)$ and $g_2(\cdot)$ are given, but we do not know whether $g_1(\cdot) = f_1(\cdot)$ and $g_2(\cdot) = f_2(\cdot)$ or whether the opposite holds, nor do we know $\eta$. Thus the learning problem consists of both estimating the optimal partition of the observation space and assigning classifications to the decision regions. Here we show that labeled samples are necessary to construct a classification rule and that they are exponentially more valuable than unlabeled samples. © 1996 IEEE.
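As a sketch of the quantities behind the first result (the notation $I_\ell$ and $I_u$ below is ours, introduced only for illustration): when $f_1(\cdot)$ and $f_2(\cdot)$ are known, a labeled pair $(X_i, \theta_i)$ informs about $\eta$ only through its Bernoulli($\eta$) label, whereas an unlabeled sample $X'_j$ is drawn from the mixture density $\eta f_1(\cdot) + (1 - \eta) f_2(\cdot)$, so the two Fisher informations being compared are
\[
I_\ell(\eta) \;=\; \frac{1}{\eta} + \frac{1}{1-\eta} \;=\; \frac{1}{\eta(1-\eta)},
\qquad
I_u(\eta) \;=\; \int \frac{\bigl(f_1(x) - f_2(x)\bigr)^2}{\eta f_1(x) + (1-\eta) f_2(x)}\, dx .
\]
Since the label $\theta_i$ carries all of the information about $\eta$ in the pair $(X_i, \theta_i)$, one has $I_u(\eta) \le I_\ell(\eta)$, so on this reading an unlabeled sample is worth at most as much as a labeled one in the known-densities setting.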