Density modeling is the task of learning an unknown probability density function from samples, and is one of the central problems of unsupervised machine learning. In this work, we show that, under standard cryptographic assumptions, there exists a density modeling problem for which fault-tolerant quantum computers offer a superpolynomial advantage over classical learning algorithms. Along the way, we provide a variety of additional results and insights of potential interest for proving future distribution-learning separations between quantum and classical learning algorithms. Specifically, we (a) provide an overview of the relationships between hardness results in supervised learning and distribution learning, and (b) show that any weak pseudorandom function can be used to construct a classically hard density modeling problem. The latter result opens up the possibility of proving quantum-classical separations for density modeling based on assumptions weaker than those required for pseudorandom functions.
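To give a concrete feel for the kind of construction alluded to above (the details here are illustrative, not the paper's actual construction): given a keyed function family, one can define a distribution whose samples are pairs (x, f_k(x)) with x uniform; any algorithm that produces an accurate density model for this distribution could be used to distinguish f_k from a truly random function, so density modeling inherits the hardness of the function family. A minimal Python sketch, using HMAC-SHA256 as a stand-in pseudorandom function:

```python
import hmac
import hashlib
import secrets

def prf_bit(key: bytes, x: bytes) -> int:
    # Stand-in PRF: the lowest bit of the first byte of HMAC-SHA256(key, x).
    return hmac.new(key, x, hashlib.sha256).digest()[0] & 1

def sample(key: bytes, n_bits: int = 16) -> tuple[int, int]:
    # One draw from the induced distribution: x uniform over {0,...,2^n - 1},
    # labeled deterministically by the keyed function.
    x = secrets.randbelow(2 ** n_bits)
    y = prf_bit(key, x.to_bytes((n_bits + 7) // 8, "big"))
    return x, y

# Generate a small dataset of (x, f_k(x)) pairs for a random key.
key = secrets.token_bytes(16)
data = [sample(key) for _ in range(5)]
```

A learner that models the density of such pairs well must effectively predict f_k(x) on fresh inputs, which is exactly what the pseudorandomness of the family forbids for efficient classical algorithms.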