Scaling Generative Quantum Machine Learning
With the advancement of quantum technology, researchers aim to understand whether and how quantum algorithms could offer advantages over their classical counterparts, e.g., in the context of machine learning. Investigating possible benefits of quantum over classical machine learning models requires thorough theoretical as well as empirical studies. In this context, various quantum machine learning algorithms have been studied that are based on short-depth, parameterized quantum circuits, which are well suited for execution on near-term quantum hardware. These models are promising candidates for near-term empirical studies aimed at understanding the applicability of quantum machine learning. However, as a variety of research has shown, training these models can become challenging, especially at increasing scale. In this talk, we discuss a set of challenges that generative quantum machine learning in particular has to face, and demonstrate potential remedies in experiments with non-trivial qubit numbers. A set of empirical results will further illustrate the presented problems and approaches.