# On the role of entanglement and statistics in learning

## Abstract

In this work we make progress in understanding the relationship between learning models given access to entangled measurements, separable measurements, and statistical measurements in the quantum statistical query ($\QSQ$) model. To this end, we show the following~results.

\textbf{Entanglement versus separable measurements.} The goal here is to learn an unknown $f$ from the concept class $\Cc\subseteq \{f:\01^n\rightarrow [k]\}$ given copies of $\frac{1}{\sqrt{2^n}}\sum_x \ket{x,f(x)}$. We show that, if $T$ copies suffice to learn $f$ using entangled measurements, then $O(nT^2)$ copies suffice to learn $f$ using only separable measurements.

\textbf{Entangled versus statistical measurements.} The goal here is to learn a function $f \in \Cc$ given access to entangled measurements versus statistical measurements. We exhibit a concept class $\Cc$ based on degree-$2$ functions that gives an exponential separation between $\QSQ$ learning and quantum learning with entangled measurements (even in the presence of noise). This proves the ``quantum analogue'' of the seminal result of Blum et al.~\cite{blum2003noise}, which separates classical $\SQ$ learning from classical $\PAC$ learning with classification~noise.

\textbf{$\QSQ$ lower bounds for learning states.} Our main technical contribution is to introduce a quantum statistical query dimension ($\QSDA$), which we use to give lower bounds on the $\QSQ$ complexity of learning. Using this, we prove exponential $\QSQ$ lower bounds for testing purity of quantum states, learning CCHL states, coset states of Abelian groups, degree-$2$ functions, planted biclique states, and learning output states of Clifford circuits of depth $\polylog(n)$.
\textbf{Further applications.} Using our $\QSQ$ lower bounds, we give an \emph{unconditional} separation between weak and strong error mitigation and prove lower bounds for learning distributions in the $\QSQ$ model. Prior works by Quek et al.~\cite{quek2022exponentially}, Hinsche et al.~\cite{hinsche2022single}, and Nietner et al.~\cite{sweke23} proved the analogous results \emph{assuming} diagonal measurements, and our work removes this~assumption.
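To make the two access models above concrete, the following minimal sketch (not from the paper; function names and the noise model are our own illustrative assumptions) builds the quantum example state $\frac{1}{\sqrt{2^n}}\sum_x \ket{x,f(x)}$ as a dense vector and models a quantum statistical query, i.e. an oracle that returns $\alpha$ with $|\alpha - \langle\psi|M|\psi\rangle| \le \tau$ for an observable $M$ and tolerance $\tau$:

```python
import numpy as np

def quantum_example_state(f, n, k):
    # |psi_f> = 2^{-n/2} * sum_x |x>|f(x)>, stored as a dense real vector
    # of dimension 2^n * k; index x*k + f(x) holds amplitude 2^{-n/2}.
    psi = np.zeros((2 ** n) * k)
    for x in range(2 ** n):
        psi[x * k + f(x)] = 2.0 ** (-n / 2)
    return psi

def qsq_oracle(psi, M, tau, rng):
    # A quantum statistical query on observable M with tolerance tau:
    # the oracle may return any alpha with |alpha - <psi|M|psi>| <= tau.
    # Here the adversarial slack is modeled as uniform noise in [-tau, tau].
    exact = float(np.real(np.vdot(psi, M @ psi)))
    return exact + rng.uniform(-tau, tau)

# Toy instance: n = 2, binary labels (k = 2), f(x) = parity of the last bit.
n, k = 2, 2
f = lambda x: x & 1
psi = quantum_example_state(f, n, k)

# Diagonal observable projecting onto label 1; <psi|M|psi> = Pr_x[f(x)=1] = 1/2.
M = np.diag([float(i % k == 1) for i in range((2 ** n) * k)])
alpha = qsq_oracle(psi, M, tau=0.01, rng=np.random.default_rng(0))
```

A $\QSQ$ learner only ever sees numbers like `alpha`, whereas a learner with (entangled or separable) measurements works with the copies of `psi` themselves; the separations in the abstract quantify the gap between these two kinds of access.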