
Exploring the potential for quantum advantage in mathematical optimization

Recent publications from members of the Quantum Optimization Working Group deliver a fresh perspective on the potential for quantum computers to demonstrate value for interesting combinatorial optimization problems.


5 Feb 2025

Stefan Woerner

Ryan Mandelbaum

Robert Davis

Optimization problems touch every facet of your life. You decide the best order to run errands to save time, or use your maps app to find an optimal driving route that avoids traffic. Optimization problems also appear widely in the industries that touch our everyday lives. They are crucial in the operation of energy grids, supply chains, and more—helping to ensure stability, safety, and efficiency.

In some cases, we solve these optimization problems using our intuition, or by making common-sense judgments based on prior experience. However, when the considered tasks grow too complex, we need to represent them as optimization problems that we can tackle with mathematics. Over the years, computational methods for solving combinatorial optimization problems have come to play an important role in business and science, and help to solve many of these problems efficiently. Nevertheless, some combinatorial optimization problems remain extremely challenging, even for state-of-the-art methods run on the most powerful classical supercomputers.
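To make this concrete, consider MaxCut, a classic combinatorial optimization problem that asks how to split a graph's vertices into two groups so that as many edges as possible cross between them. The following minimal sketch (purely illustrative, not taken from the white paper) solves a tiny instance by exhaustive search, and shows why that approach collapses as problems grow:

```python
# Illustrative only: a tiny MaxCut instance solved by brute force.
# MaxCut asks for a partition of a graph's vertices into two sets that
# maximizes the number of edges crossing the partition. The search space
# grows as 2^n, which is why exhaustive search breaks down quickly.
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a small 4-vertex graph
n = 4

best_cut, best_assignment = -1, None
for bits in product([0, 1], repeat=n):          # all 2^n partitions
    cut = sum(1 for u, v in edges if bits[u] != bits[v])
    if cut > best_cut:
        best_cut, best_assignment = cut, bits

print(f"max cut = {best_cut}, partition = {best_assignment}")
# For n = 4 this loop checks 16 candidates; for n = 100 it would need
# roughly 1.3e30, which is why practical solvers rely on smarter methods.
```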

Quantum computers have the potential to augment our ability to solve optimization problems. However, questions remain as to which quantum methods and optimization problem classes will enable us to achieve practical quantum advantage. So far, those questions have been difficult to answer because the field of quantum optimization has lacked systematic, reproducible benchmarks that allow us to make fair performance comparisons between quantum and purely classical optimization algorithms—especially given the limitations of what researchers can actually run on current quantum hardware. A white paper recently published in Nature Reviews Physics by representatives of the Quantum Optimization Working Group aims to address these challenges.

Co-authored by 46 members of the working group—including representatives from enterprise and academic research institutions like University of Amsterdam, MIT, Zuse Institute Berlin, IBM, Hartree Centre, E.ON, Wells Fargo, and more—the white paper provides a broad overview of optimization methods and explores the potential for quantum advantage in a variety of problem settings. Moreover, it discusses potential strategies to improve existing quantum methods. One notable example mentioned in the white paper comes from promising new collaborative research conducted by IBM, Los Alamos National Lab, and the University of Basel. That research details how applying something known as the Conditional Value at Risk (CVaR) to the samples returned from a quantum computer may be useful for various optimization tasks. More on that later.

The white paper also outlines the essential building blocks for quantum optimization algorithms, and proposes clear metrics and benchmarking problems to facilitate meaningful comparisons between quantum and classical optimization techniques. With the ultimate aim of accelerating progress toward quantum advantage in the field of combinatorial optimization, the paper serves as an important reminder that useful quantum advantages for optimization problems remain well within the realm of theoretical possibility, despite doubts that have been expressed by some in the research community.

Examining the challenges and opportunities of quantum optimization

Quantum optimization is likely one of the most misunderstood and polarizing domains in quantum algorithms research. Early claims suggested quantum computers could efficiently solve hard optimization problems by exploring every possible solution in parallel—a misconception that still shows up occasionally. Others argue that quantum optimization methods can at best provide a quadratic speedup over classical approaches that scale exponentially with problem size—ultimately yielding little value, since a quadratic speedup over an exponential runtime is still an exponential runtime (for example, reducing a brute-force search over 2^n candidates to roughly 2^(n/2) steps).

In the new optimization white paper, the authors explain that the truth, as usual, is more nuanced than either of these extremes. Quantum computers do not explore every solution in parallel, at least not in a way that would provide immediate value for optimization. However, the claim that quantum methods provide only a quadratic speedup for optimization problems is usually based on worst-case problem instances, and on the assumption that we must find an optimal solution that is provably better than all possible alternatives.

In the real world, worst-case optimization problem instances are rarely relevant to practical use cases. This is one of the key reasons classical algorithms have proven so useful in practical settings, despite the well known exponential runtimes required to generally solve these problems to provable optimality. In practice, classical algorithms and heuristics can obtain good solutions for many useful problems, even at large problem sizes.

However, there is still a lot of potential to be unlocked through better optimization algorithms. In fields like finance and supply chain management, even small improvements can make a huge impact. Solving real-world problems with purely classical methods typically involves simplifying them first, for instance by removing certain factors or approximating the involved dynamics; finding good solutions for more realistic models could make a huge difference. For some problems, classical algorithms fail to find good solutions altogether—even at relatively small problem sizes.

Quantum computing offers a new set of tools and capabilities that may improve our ability to tackle at least some of the real-world optimization problems that are classically challenging. A “good-enough” solution from a quantum algorithm that is better, less costly, or faster to obtain than any classical solution has the potential to be immensely valuable.

Is advantage in quantum optimization really possible?

As far as we know, quantum computers will never provide exponential speedups for all instances of all optimization problems. However, we do know of special classes of problems where quantum optimization techniques achieve exponential speedups over classical alternatives.

For example, some optimization problems can be reduced to integer factoring, where Shor’s algorithm offers an exponential speedup over all known classical methods. This is the most prominent example of exponential advantage in quantum optimization, although not the most useful one, as such problems are unlikely to appear in practice. Still, there are other cases that promise similar speedups for efficiently finding better-than-classical solutions to more relevant problem classes, such as this recent result from researchers at Google, and examples like these are encouraging for quantum optimization in general.

While quantum algorithms with provable performance guarantees, like those mentioned above, usually require fault-tolerant hardware, we also already have quantum optimization algorithms that we can run on today’s noisy devices, and which are already capable of returning good solutions for at least some problems. It is entirely possible that new algorithms—or new variants of existing algorithms—will be discovered that enable quantum methods to rival or surpass the solution quality of classical methods using noisy hardware.

These algorithms are usually heuristics, particularly when executed on a noisy quantum computer. Heuristics are algorithms without a priori performance guarantees. They are often based on a deep intuitive understanding of the considered problem, or obtained by terminating exact optimization algorithms early.

Heuristics appear all throughout both quantum and classical computing. In fact, most classical optimization algorithms that we use in practice are heuristics. Genetic algorithms, A* search, and simulated annealing are all examples of influential classical optimization methods that developers have used for decades and that often work very well in practice.
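To illustrate, here is a minimal sketch of one of those classical heuristics, simulated annealing, applied to the same kind of small MaxCut instance (the parameters are illustrative, not a tuned solver):

```python
# A minimal simulated-annealing sketch for MaxCut (illustrative, not tuned).
# The heuristic accepts worse solutions with a probability that shrinks as
# the "temperature" cools, which helps it escape local optima.
import math
import random

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

def cut_value(bits):
    return sum(1 for u, v in edges if bits[u] != bits[v])

random.seed(0)
state = [random.randint(0, 1) for _ in range(n)]
temperature = 2.0
for step in range(500):
    candidate = state[:]
    candidate[random.randrange(n)] ^= 1      # flip one vertex
    delta = cut_value(candidate) - cut_value(state)
    # Always accept improvements; accept downhill moves with Boltzmann prob.
    if delta >= 0 or random.random() < math.exp(delta / temperature):
        state = candidate
    temperature *= 0.99                       # geometric cooling schedule

print(f"cut found = {cut_value(state)}")     # 4 is optimal for this graph
```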

All of this is to say that, despite their lack of performance guarantees, heuristics are not a bad thing. In many cases, they are the best we can hope for because we know these problems to be difficult in general for both quantum and classical methods. What we need are new quantum heuristics that work well for some of those problems where classical algorithms struggle. Thus, our job as a community is to find exact, approximate, or heuristic quantum optimization algorithms that work better than any classical technique—at least for some problems—as well as develop systematic benchmarks to understand the path to practical quantum advantage in optimization.

New quantum optimization research makes an impact

Today, researchers are beginning to demonstrate that quantum computers, with the help of classical post-processing methods called error mitigation, can deliver accurate expectation values for certain valuable problems. Given a system and a property we want to measure, the expectation value is a weighted average of the possible outcomes from the quantum computer. These are very useful calculations for chemistry and physics, but for optimization, we're often more interested in the samples output by the processors, rather than expectation values.
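As a quick illustration of the distinction, here is how an expectation value is estimated from measurement samples; the counts and cost function below are made up for the example:

```python
# Illustrative: an expectation value is just a weighted average of measured
# outcomes. Given bitstring counts from a (hypothetical) quantum sampler and
# a cost function, the estimate is sum(cost * frequency).
counts = {"00": 480, "01": 30, "10": 25, "11": 465}   # made-up sample counts

def cost(bitstring):
    # Hypothetical objective, e.g., the cut value encoded by the bitstring.
    return {"00": 0, "01": 1, "10": 1, "11": 0}[bitstring]

shots = sum(counts.values())
expectation = sum(cost(b) * c / shots for b, c in counts.items())
print(f"estimated expectation value = {expectation:.3f}")
```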

Now, a paper recently published in Nature Computational Science highlights how to extract value when sampling from noisy quantum computers in the near term. Specifically, the authors formally show how to determine the sampling overhead needed to compensate for noise in quantum optimization algorithms. In this particular context, the overhead turns out to be only a fraction of the computational overhead incurred by error mitigation methods used to obtain unbiased estimators of expectation values.

Learn more about incorporating CVaR into your optimization workflows in our tutorial on IBM Quantum Learning.

In other words, if you have a quantum circuit that you know with reasonable probability will return a good solution to your optimization problem, you can now quantify the additional number of samples you need to draw from that circuit to counteract the effects of noise. This is a powerful insight that is essentially telling you both how to quantify the effect of the noise in your circuit, and what you must do to compensate for it.
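The rigorous overhead bounds are derived in the paper itself, but the underlying counting argument can be sketched with elementary probability. Assuming a single shot returns a good solution with some probability p, the hypothetical numbers below show how a noise-induced drop in p translates into extra samples:

```python
# Sketch of the statistical idea behind sampling overhead (the paper derives
# rigorous bounds; this only illustrates the counting argument). If a single
# shot returns a good solution with probability p, then seeing at least one
# good sample with confidence 1 - delta requires about ln(delta)/ln(1 - p)
# shots. Noise lowers p, and the ratio of shot counts is the overhead.
import math

def shots_needed(p_good, delta=0.01):
    """Shots to observe >= 1 good sample with probability 1 - delta."""
    return math.ceil(math.log(delta) / math.log(1.0 - p_good))

p_ideal = 0.10          # assumed success probability per noise-free shot
p_noisy = 0.04          # assumed success probability on noisy hardware

n_ideal, n_noisy = shots_needed(p_ideal), shots_needed(p_noisy)
print(f"noise-free shots: {n_ideal}, noisy shots: {n_noisy}, "
      f"overhead: {n_noisy / n_ideal:.1f}x")
```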

Building on this, the authors show that a function called the Conditional Value at Risk (CVaR) can give us provable bounds on the noise-free expectation values—again with significantly less overhead than the error mitigation methods for expectation values.

What is CVaR, exactly? CVaR, or expected shortfall, is an important function used in finance that gives information about the tail of a distribution. In finance, it tells an investor the average amount of money they can expect to lose when the market turns south. In the context of quantum optimization, because we know how much more often we need to sample to get solutions that are at least as good as the noise-free case, we can also use it to derive lower and upper bounds for the expectation value of interest—i.e., we can potentially use it to identify the minimum value that a solution to our optimization problem can achieve.
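In sampling terms, CVaR at level alpha is simply the average over the best alpha-fraction of the outcomes you measured. Here is a minimal sketch, using made-up sampled costs:

```python
# Illustrative CVaR of a sample set: the average of the best alpha-fraction
# of outcomes. For a minimization problem, "best" means lowest cost, so CVaR
# summarizes the favorable tail of the measured distribution.
import numpy as np

def cvar(costs, alpha=0.1):
    """Average of the lowest alpha-fraction of the sampled costs."""
    costs = np.sort(np.asarray(costs))
    k = max(1, int(np.ceil(alpha * len(costs))))
    return costs[:k].mean()

rng = np.random.default_rng(7)
samples = rng.normal(loc=5.0, scale=2.0, size=1000)   # made-up sampled costs
print(f"mean = {samples.mean():.2f}, CVaR(0.1) = {cvar(samples):.2f}")
# CVaR sits well below the mean: it captures the tail, not the bulk.
```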

Where other, more common error mitigation methods like probabilistic error cancellation (PEC) and zero noise extrapolation (ZNE) are computationally expensive and aim to recover unbiased estimates of expectation values, CVaR is computationally cheap and provides bounds on expectation values, rather than an exact output. The result is noise-free information about the outputs of a quantum computer—information that we can use in our quantum optimization algorithms.

CVaR was first proposed in 2019 as a robust loss function for training parametrized circuits, and it remains very popular today. Back then, however, its usage was motivated solely by intuition. These new results close the gap in our theoretical understanding of CVaR as a loss function and demonstrate its error-mitigating capabilities for training parametrized circuits.
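For a rough sense of what that training loop looks like, here is a hedged sketch that minimizes a CVaR loss over samples from a parametrized model. To keep the example self-contained, a toy product-state sampler stands in for a real parametrized quantum circuit, and names like `sample_bitstrings` are illustrative rather than any particular library's API:

```python
# Hedged sketch of CVaR as a training loss for a parametrized sampler.
# A real implementation would sample from a parametrized quantum circuit;
# here a toy product-state model stands in so the example runs anywhere.
import numpy as np
from scipy.optimize import minimize

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
rng = np.random.default_rng(0)

def sample_bitstrings(thetas, shots=256):
    # Toy stand-in for a quantum sampler: qubit i reads 1 with
    # probability sin^2(theta_i / 2), as for independent RY rotations.
    probs = np.sin(thetas / 2) ** 2
    return (rng.random((shots, n)) < probs).astype(int)

def cvar_loss(thetas, alpha=0.25):
    bits = sample_bitstrings(thetas)
    # Minimize negative cut value, i.e., maximize the cut.
    costs = -np.array([sum(b[u] != b[v] for u, v in edges) for b in bits])
    costs.sort()
    k = max(1, int(np.ceil(alpha * len(costs))))
    return costs[:k].mean()          # average over the best alpha-fraction

# COBYLA is gradient-free, so it tolerates this stochastic objective
# well enough for a demo.
result = minimize(cvar_loss, x0=np.full(n, np.pi / 2), method="COBYLA")
print(f"best CVaR loss: {result.fun:.2f}")   # -4 corresponds to the optimum
```

Focusing the loss on the best alpha-fraction of samples is exactly what makes CVaR attractive for optimization: a noisy circuit only needs to place some probability mass on good solutions, not concentrate all of it there.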

Optimizing as a community

Theoretical results like those seen in the recent CVaR paper show potential directions to scale quantum optimization methods on noisy hardware. They also highlight how important it is that we begin combining theory work with empirical research to explore the potential of theoretical proposals in practice.

In classical computing, there are many examples of algorithms that are shown to work well empirically long before they are fully understood from a theoretical perspective. Things are different in quantum computing, where the limited capabilities and availability of quantum hardware have historically meant that quantum algorithms are primarily developed theoretically before they are tested empirically.

However, these limitations are becoming less and less relevant every day. Now that the quantum community has access to gate-based quantum devices with hundreds of qubits that are capable of running circuits beyond exact classical simulation methods, we must begin doing more work that combines theoretical and empirical research. This also highlights the importance of systematic benchmarking to understand the path towards quantum advantage in optimization.

We’re eager to see continued progress in this field. For a more detailed perspective on the path toward quantum advantage in combinatorial optimization, be sure to read the Quantum Optimization Working Group’s full optimization white paper in Nature Reviews Physics and the recent CVaR paper in Nature Computational Science. You can learn more about this and other quantum working groups in our previous blog, here, and explore our CVaR tutorial here.

