Publication
APS March Meeting 2021
Talk
The power of quantum neural networks
Abstract
Fault-tolerant quantum computers offer the promise of dramatically improving machine learning. In the near term, however, the benefits of quantum machine learning are not so clear. The expressibility and trainability of quantum models (and quantum neural networks in particular) require further investigation. In this work, we use tools from information geometry to define a notion of expressibility for quantum and classical models. The effective dimension, which depends on the Fisher information, is used to prove a novel generalisation bound and establish a robust measure of expressibility. We show that quantum neural networks achieve a better effective dimension than comparable classical neural networks. To understand the trainability of quantum models, we connect the Fisher information to barren plateaus, the problem of vanishing gradients. Importantly, quantum neural networks can show resilience to this phenomenon and train faster than classical models due to their favourable optimisation landscapes, captured by a more evenly spread Fisher information spectrum. Our work is the first to demonstrate that well-designed quantum neural networks offer an advantage over classical neural networks through a higher effective dimension and faster training, which we verify on real quantum hardware.
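To make the effective dimension concrete, the sketch below estimates it by Monte Carlo from a set of Fisher information matrices, following the general form of the definition described in the abstract: normalise the Fisher matrices so their average trace equals the parameter count, then compute 2 log((1/k) Σ √det(I + κ F̂)) / log κ with κ proportional to n / log n. This is a minimal illustration, not the authors' code: the function name, the choice γ = 1, and the random positive-semidefinite "Fisher" matrices are assumptions made here purely for demonstration.

```python
import numpy as np

def effective_dimension(fishers, n, gamma=1.0):
    """Monte Carlo estimate of an effective dimension.

    fishers : array of shape (k, d, d), Fisher information matrices
              sampled at k parameter points (assumed precomputed).
    n       : number of data samples the generalisation bound refers to.
    gamma   : scaling constant (gamma = 1 is an assumption here).
    """
    k, d, _ = fishers.shape
    # Normalise so the average trace over samples equals d.
    fishers = fishers * (d * k) / np.trace(fishers, axis1=1, axis2=2).sum()
    kappa = gamma * n / (2 * np.pi * np.log(n))
    # log det(I + kappa * F_hat), computed via slogdet for stability.
    logdets = np.array(
        [np.linalg.slogdet(np.eye(d) + kappa * f)[1] for f in fishers]
    )
    # Average of sqrt(det) = exp(0.5 * logdet), in log space (logsumexp trick).
    m = 0.5 * logdets.max()
    log_avg = np.log(np.exp(0.5 * logdets - m).mean()) + m
    return 2 * log_avg / np.log(kappa)

# Toy illustration with random positive-semidefinite stand-in matrices.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 4, 4))
fishers = A @ A.transpose(0, 2, 1)  # symmetric PSD, d = 4 parameters
d_eff = effective_dimension(fishers, n=1000)
print("effective dimension estimate:", d_eff)
```

A flat, evenly spread Fisher spectrum pushes the estimate toward the full parameter count, while a spectrum concentrated in a few eigenvalues (as in barren-plateau regimes) yields a much smaller value, which is the intuition behind using this quantity as an expressibility measure.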