Convergence of the nearest neighbor rule
Abstract
If the nearest neighbor rule is used to classify unknown samples, then Cover and Hart have shown that the average probability of error using n known samples (denoted by R_n) converges to a number R as n tends to infinity, where R∗ ≤ R ≤ 2R∗(1 − R∗) and R∗ is the Bayes probability of error. Here it is shown that when the samples lie in n-dimensional Euclidean space, the probability of error for the nearest neighbor rule conditioned on the n known samples (denoted by L_n, so that E L_n = R_n) converges to R with probability 1 under mild continuity and moment assumptions on the class densities. Two estimates of R from the n known samples are shown to be consistent. Rates of convergence of L_n to R are also given.
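To make the abstract's quantities concrete, the following is a minimal illustrative sketch, not anything taken from the paper itself: it implements the 1-NN rule, draws n known samples from a hypothetical two-class Gaussian problem (an assumption made only for illustration; the paper requires just mild continuity and moment conditions on the class densities), estimates the conditional error L_n by Monte Carlo, and prints the Bayes error R∗ and the Cover–Hart bound 2R∗(1 − R∗) that bracket the limit R.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)

def nn_classify(x, samples, labels):
    """1-NN rule: return the label of the known sample nearest to x."""
    return labels[np.argmin(np.linalg.norm(samples - x, axis=1))]

# Hypothetical setup (not from the paper): two equal-prior classes,
# unit-variance Gaussians centered at -mu*e1 and +mu*e1 in R^d.
d, n, mu = 2, 500, 1.0
e1 = np.eye(d)[0]
labels = rng.integers(0, 2, size=n)
samples = rng.standard_normal((n, d)) + mu * (2 * labels[:, None] - 1) * e1

# Monte Carlo estimate of L_n: the error probability of the 1-NN rule
# conditioned on these particular n known samples.
m = 2000
test_labels = rng.integers(0, 2, size=m)
test_x = rng.standard_normal((m, d)) + mu * (2 * test_labels[:, None] - 1) * e1
errors = sum(nn_classify(x, samples, labels) != y
             for x, y in zip(test_x, test_labels))
L_n = errors / m

# For this Gaussian pair the Bayes error has a closed form,
# R* = Phi(-mu) = erfc(mu / sqrt(2)) / 2, so the Cover-Hart
# bounds R* <= R <= 2R*(1 - R*) can be checked numerically.
r_star = 0.5 * erfc(mu / sqrt(2))
print(f"estimated L_n = {L_n:.3f}")
print(f"R* = {r_star:.3f}, 2R*(1 - R*) = {2 * r_star * (1 - r_star):.3f}")
```

Under the paper's result, rerunning this sketch with larger n should make L_n settle, with probability 1, toward a limit R lying between the two printed bounds.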