With the advent of i-vectors, linear discriminant analysis (LDA) has become an integral part of many state-of-the-art speaker recognition systems. Here, LDA is primarily employed to annihilate the non-speaker-related (e.g., channel) directions, thereby maximizing the inter-speaker separation. The traditional approach for computing the LDA transform uses parametric representations for both the intra- and inter-speaker scatter matrices that are based on a Gaussian distribution assumption. However, it is known that the actual distribution of i-vectors may not be Gaussian, particularly in the presence of noise and channel distortions. Motivated by this observation, we present an alternative non-parametric discriminant analysis (NDA) technique that measures both the within- and between-speaker variation on a local basis using the nearest neighbor rule. The effectiveness of the NDA method is evaluated in the context of noisy speaker recognition tasks using speech material from the DARPA Robust Automatic Transcription of Speech (RATS) program. Experimental results indicate that NDA is more effective than the traditional parametric LDA for speaker recognition under noisy and channel-degraded conditions.
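To make the nearest-neighbor idea concrete, the following is a minimal sketch of nonparametric scatter estimation: each sample's deviation is measured against the local mean of its k nearest neighbors rather than a global class mean. This is an illustrative simplification of NDA (the full formulation typically also applies a distance-based weighting to de-emphasize samples far from class boundaries); the function names, the choice of Euclidean distance, and `k=3` are our assumptions, not details fixed by the abstract.

```python
import numpy as np

def knn_local_mean(x, pool, k):
    """Mean of the k nearest neighbors of x within pool (Euclidean distance)."""
    d = np.linalg.norm(pool - x, axis=1)
    idx = np.argsort(d)[:k]
    return pool[idx].mean(axis=0)

def nda_scatter(X, y, k=3):
    """Nonparametric within- and between-class scatter matrices (unweighted sketch).

    X: (n_samples, dim) array of i-vectors; y: speaker labels.
    Deviations are taken from local k-NN means instead of global class means,
    so the scatter estimates adapt to the local structure of the data.
    """
    dim = X.shape[1]
    Sw = np.zeros((dim, dim))
    Sb = np.zeros((dim, dim))
    classes = np.unique(y)
    for c in classes:
        Xc = X[y == c]
        for x in Xc:
            # Within-class term: local mean over same-class neighbors,
            # excluding the sample itself.
            others = Xc[~np.all(Xc == x, axis=1)]
            if len(others) >= 1:
                m = knn_local_mean(x, others, min(k, len(others)))
                d = (x - m)[:, None]
                Sw += d @ d.T
            # Between-class term: local mean over neighbors in each
            # competing class.
            for c2 in classes:
                if c2 == c:
                    continue
                pool = X[y == c2]
                m = knn_local_mean(x, pool, min(k, len(pool)))
                d = (x - m)[:, None]
                Sb += d @ d.T
    return Sw, Sb
```

As in parametric LDA, a projection could then be obtained from the leading eigenvectors of `np.linalg.inv(Sw) @ Sb`; the difference lies only in how the scatter matrices are estimated.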