Convolutional Neural Networks (ConvNets/CNNs) have revolutionized computer vision research through their ability to capture complex patterns, yielding high inference accuracy. However, the increasing complexity of these networks means they are best suited to server machines with powerful GPUs. We envision that deep learning applications will eventually be widely deployed on mobile devices, e.g., smartphones, self-driving cars, and drones. In this paper, we therefore aim to understand the resource requirements of CNNs on mobile devices in terms of compute time, memory, and power. First, by deploying several popular CNNs on different mobile CPUs and GPUs, we measure and analyze their performance and resource usage at layer-wise granularity. Our findings point to potential ways of optimizing CNN pipelines on mobile devices. Second, we model the resource requirements of the core CNN computations. Finally, based on these measurements and models, we build and evaluate our modeling tool, Augur, which takes a CNN configuration (descriptor) as input and estimates the compute time, memory, and power the CNN requires, giving insight into whether, and how efficiently, a CNN can run on a given mobile platform.