Fault-tolerant quantum computers offer the promise of dramatically improving machine learning. In the near term, however, the benefits of quantum machine learning are not so clear. The expressibility and trainability of quantum models, and of quantum neural networks in particular, require further investigation. In this work, we use tools from information geometry to define a notion of expressibility for quantum and classical models. The effective dimension, which depends on the Fisher information, is used to prove a novel generalisation bound and to establish a robust measure of expressibility. We show that quantum neural networks achieve a better effective dimension than classical neural networks. To understand the trainability of quantum models, we connect the Fisher information to barren plateaus, the problem of vanishing gradients. Importantly, quantum neural networks can show resilience to this phenomenon and train faster than classical models due to their favourable optimisation landscapes, captured by a more evenly spread Fisher information spectrum. Our work is the first to demonstrate that well-designed quantum neural networks offer an advantage over classical neural networks through a higher effective dimension and faster training ability, which we verify on real quantum hardware.
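As a concrete illustration of the quantity named above, the effective dimension can be estimated from Monte Carlo samples of the Fisher information matrix. The following is a minimal sketch, not the authors' implementation: it assumes a standard definition in which the Fisher matrices are trace-normalised and the effective dimension is computed from log-determinants of regularised Fisher matrices, averaged over parameter space; the function name `effective_dimension` and the constant `gamma` are illustrative choices.

```python
import numpy as np

def effective_dimension(fishers, n, gamma=1.0):
    """Estimate an effective dimension from sampled Fisher matrices.

    fishers : array of shape (k, d, d), Fisher information matrices
              sampled at k parameter sets (assumed positive semi-definite).
    n       : number of data samples (must satisfy n > e so log(n) > 0).
    gamma   : constant in (0, 1] scaling the regularisation strength.
    """
    k, d, _ = fishers.shape
    # Normalise so the average trace of the Fisher matrices equals d.
    fhat = fishers * d / np.mean(np.trace(fishers, axis1=1, axis2=2))
    kappa = gamma * n / (2 * np.pi * np.log(n))
    # Stable log-determinants of (I + kappa * F_hat) for each sample.
    _, logdets = np.linalg.slogdet(np.eye(d) + kappa * fhat)
    # Monte Carlo average over parameter space, then rescale by log(kappa).
    log_integral = np.log(np.mean(np.exp(0.5 * logdets)))
    return 2 * log_integral / np.log(kappa)

# Toy usage: random positive semi-definite matrices stand in for Fisher
# information estimated from a model; the result lies near d for a
# well-conditioned (evenly spread) spectrum and well below d otherwise.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 4, 4))
fishers = A @ np.transpose(A, (0, 2, 1))
ed = effective_dimension(fishers, n=1000)
```

A more evenly spread Fisher eigenvalue spectrum inflates every log-determinant term, which is why the abstract's claim about quantum models links spectrum shape directly to a higher effective dimension.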