Speech and language disturbances have been observed from the early stages of Alzheimer's disease (AD), including mild cognitive impairment (MCI), and speech analysis is expected to serve as a screening tool for early detection of AD and MCI. However, whether and how automatic speech analysis, including speech recognition in a self-administered tool, can be used for such detection remains largely unexplored. In this study, we performed automatic analysis of speech data collected via a mobile application from 114 older participants during cognitive tasks. The goal was to classify AD, MCI, and cognitively normal (CN) groups using speech features characterizing acoustic, prosodic, and linguistic aspects. First, we evaluated how accurately linguistic features could be automatically extracted from transcriptions generated by automatic speech recognition (ASR), and found that these features were highly correlated (r = 0.92) with those extracted from manual transcriptions. Then, a machine-learning classifier using these features achieved 78.6% accuracy for three-way classification of AD, MCI, and CN under nested cross-validation (AD versus CN: 91.2% accuracy; MCI versus CN: 87.6% accuracy). Our results suggest the utility and validity of a mobile application with automatic speech analysis as a self-administered screening tool for early detection of AD and MCI.
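The nested cross-validation evaluation mentioned above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the synthetic feature matrix, the random-forest classifier, and the hyperparameter grid are all placeholder assumptions; only the sample size (114) and the three-class setup (AD/MCI/CN) come from the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Synthetic stand-in for the speech-feature matrix: 114 participants,
# placeholder acoustic/prosodic/linguistic features, 3 classes (AD/MCI/CN).
X, y = make_classification(
    n_samples=114, n_features=20, n_informative=8,
    n_classes=3, n_clusters_per_class=1, random_state=0,
)

# Nested CV: the inner loop tunes hyperparameters, while the outer loop
# estimates generalization accuracy on data unseen by the tuning step.
inner_cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

clf = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=inner_cv,
)
scores = cross_val_score(clf, X, y, cv=outer_cv, scoring="accuracy")
print(f"nested-CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Keeping the hyperparameter search inside each outer fold avoids the optimistic bias that arises when the same data are used both to tune and to evaluate the model.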