Pathological speech involves atypical speech production, which may result from several factors, including oral diseases, physical disabilities of the voice production system, and atypical anatomy. Automatic evaluation of intelligibility in patients with pathological speech can assist accurate diagnosis of pathological conditions. Loss of intelligibility may be associated with any of several pathological conditions, making automatic evaluation a challenging computational problem. A Mixture of Experts (MoE) models class boundaries as a weighted combination of several experts and can characterize the complex class boundaries arising from pathological variability. We train an MoE for intelligibility evaluation using a modified Expectation Maximization (EM) algorithm based on a joint simulated annealing and gradient ascent procedure. Our algorithm optimizes the expert parameters and simultaneously selects a feature subset for each expert. We observe that the MoE trained with the new EM algorithm outperforms not only a single-classifier baseline but also the vanilla MoE. Further data analysis interprets the weights assigned to each expert during inference. We also obtain a different feature subset per expert in the mixture, illustrating that feature use depends on the location of a data point in the feature space.
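The weighted combination of experts described above can be sketched in a few lines. This is a minimal, generic MoE inference sketch, not the paper's implementation: the linear gate, the linear experts, and the per-expert binary feature masks (standing in for the per-expert feature subsets) are all illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax for the gating weights
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def moe_predict(x, gate_w, experts):
    # gate assigns a weight to each expert based on where x lies in feature space
    g = softmax(gate_w @ x)
    # final score is the gate-weighted combination of expert outputs
    scores = np.array([expert(x) for expert in experts])
    return float(g @ scores), g

rng = np.random.default_rng(0)
d, k = 4, 3  # feature dimension, number of experts (illustrative sizes)
gate_w = rng.normal(size=(k, d))

# hypothetical per-expert feature subsets, encoded as binary masks:
# each expert only sees the features its mask selects
masks = [np.array([1., 1., 0., 0.]),
         np.array([0., 0., 1., 1.]),
         np.array([1., 0., 1., 0.])]
expert_w = [rng.normal(size=d) for _ in range(k)]
experts = [lambda x, w=w, m=m: float(w @ (m * x))
           for w, m in zip(expert_w, masks)]

x = rng.normal(size=d)
y, g = moe_predict(x, gate_w, experts)
print("score:", y, "gate weights:", g)
```

The gate weights sum to one, so the output is a convex combination of expert scores; a different input lands in a different region of feature space and shifts weight toward different experts (and hence different feature subsets).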