
How IBM Quantum is advancing physics with world-leading theory research

IBM Quantum isn’t just building quantum computers; we’re developing an ecosystem around a technology that pushes the limits of human knowledge. To navigate forward in this field, we rely on theoretical physicists to chart the course.

9 Mar 2022

Sarah Sheldon

Sergey Bravyi

You can think of IBM Quantum’s theorists as writing quantum computing’s rulebook. Theorists study the capabilities of classical and quantum computers at the most basic level to determine what kinds of advantages quantum might offer. They help create strategies to mitigate errors and improve our device performance in the short term, while figuring out how to correct errors in the long term to realize fault-tolerant quantum computers. And, of course, they think about how we can use and manipulate quantum information to solve some of computing’s toughest challenges.

This year, IBM Quantum researchers had five papers accepted to the highly selective Conference on Quantum Information Processing (QIP), the premier annual meeting for quantum information theorists. The accepted papers cover error mitigation and correction strategies, topology and fractal geometry, the computational complexity of two relevant problems, and quantum communication. Together, this work is helping IBM Quantum plan for quantum computing’s future.

From error mitigation to error correction

Quantum computers calculate using the mathematics that describes extremely small or extremely cold systems, performing operations beyond the reach of classical computers. However, objects in this realm are extremely sensitive to disturbances, so today’s quantum processors are hampered by noise and errors. Developers hope to realize fault-tolerant quantum computation: quantum computers that still deliver correct answers in the presence of noise and errors, correcting those errors faster than new ones appear.

There’s a catch, however: while classical computers can copy data to build in redundancy, the rules of quantum mechanics say you can’t make an identical clone of an unknown quantum state, and attempting to measure a quantum state collapses it into classical information. Therefore, theorists are devising workarounds that let them deal with errors without losing valuable quantum information. The threshold theorem says that, provided we can lower the noise of our quantum hardware below a certain rate, we can efficiently suppress errors for any quantum circuit faster than new errors crop up.
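
To give a sense of what the threshold theorem promises, here is a schematic version of the statement, a standard rule of thumb for distance-d codes such as the surface code rather than a result from the papers discussed here:

$$\epsilon_L \;\sim\; A \left(\frac{p}{p_{\mathrm{th}}}\right)^{\lfloor (d+1)/2 \rfloor}$$

Here p is the physical error rate, p_th is the threshold, ε_L is the logical error rate, and A is a constant that depends on the code. Once p drops below p_th, increasing the code distance d suppresses logical errors exponentially.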

Realizing quantum error correction, which incorporates redundant qubits to make quantum computations impervious to errors, is no small feat for hardware developers or for theorists. In the meantime, researchers are developing quantum error mitigation techniques, which lower effective error rates on noisy hardware without requiring additional qubits. One such paper accepted to QIP,1 authored by IBM’s Christophe Piveteau, David Sutter, Sergey Bravyi, Jay Gambetta, and Kristan Temme, aims to push quantum computing beyond the reach of classical computers with the help of error mitigation. Error mitigation requires less overhead, and we can already implement it on hardware today, but it doesn’t scale well and can’t serve as a long-term solution. Instead, it bridges the gap until we have full-scale quantum error correction.

To build scalable quantum computers, we need a universal set of gates that can run on error-corrected qubits. Importantly, these gates must be realized in a fault-tolerant fashion, such that the extra errors introduced by a gate can themselves be corrected. Error correction schemes come with a limited set of fault-tolerant gates that can be realized “transversally,” that is, by applying multiple copies of the same gate to the constituent physical qubits. While transversal gates don’t add much hardware overhead, they can realize only a restricted class of gates known as the Clifford group. And Clifford-only computations are not universal; in fact, they can be efficiently simulated on a conventional classical computer.

To unlock the full power of quantum algorithms, we need at least one gate outside the Clifford group. A common choice is the single-qubit “T-gate,” which, if you imagine a quantum state as a point on a sphere, rotates that point by 45 degrees around the Z-axis. We can efficiently compile any quantum algorithm into a sequence of Clifford and T-gates. Unfortunately, making T-gates fault tolerant is a painful process that involves building “magic state factories” and incurs a significant hardware overhead.
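
Concretely, the T-gate is the diagonal single-qubit unitary

$$T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix}$$

which equals a rotation by π/4 (45 degrees) about the Z-axis up to an irrelevant global phase. The Clifford group, by contrast, is generated by the Hadamard gate H, the phase gate S = T², and CNOT; T itself sits outside that group, which is why the Clifford + T gate set is universal.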

The cost of fault-tolerant T-gates may be prohibitive for near-term quantum devices, even ones capable of realizing error-corrected qubits and fault-tolerant Clifford gates. Temme et al.’s paper attempts to split the difference by applying error correction only to the low-overhead Clifford gates, while applying error mitigation techniques to noisy T-gates. According to the paper’s calculations, this allows us to produce circuits beyond the reach of state-of-the-art classical computers without paying the full price of realizing a fault-tolerant universal gate set.

Even as we attempt to realize error correction and error mitigation, we’re still working on deepening our understanding of how qubits work together to handle errors — after all, no one has ever built a fault-tolerant quantum computer before. Another paper2 accepted to QIP, by IBM Quantum’s Guanyu Zhu, Tomas Jochym-O’Connor, and Arpit Dua, digs deep into the core mathematical theory underlying how error correction works: topology.

We can classify different objects based on core properties that stay the same under deformations that don’t require cutting and pasting. For example, you might lump coffee mugs and donuts into the same class, because you can squeeze one into the other that way. The same principle can protect quantum information from noise: by storing quantum information across wide swaths of a quantum processor, you can create a system in which localized noise acts like one of those continuous deformations and leaves the stored information unchanged. This endows the system with a memory, so it won’t decohere and forget its quantum information.

However, most of these topological studies have analyzed quantum information distributed over only certain kinds of shapes. We wanted to see whether this topological order could also exist for information distributed over a fractal geometry, one with holes at every length scale. Such a geometry could make the entanglement very fragile.

Previous work has found that this kind of space could support classical memory. But we wanted to know: could it support topological quantum memory, too?

We found that topological order can’t exist on this geometry in two dimensions, but can exist on a fractal embedded in three dimensions. By looking for topological order and the corresponding quantum error correcting codes on these fractal geometries, we found an efficient way to implement non-Clifford logical gates, and hence fault-tolerant universal quantum computing, with a significant reduction in qubit number and space overhead. Exploring quantum error correction in this realm doesn’t just help us advance quantum computation; it also pushes our understanding of the mathematics of geometry and topology overall.

So what can quantum actually do?

While laying the groundwork for our hardware is critical to realizing fault-tolerant quantum computing, there are still fundamental questions to ask: can we rigorously prove that quantum is better than classical? And if so, what benefits can quantum provide? IBM Quantum theorists are working to answer these questions at the deepest level, providing the soil from which applications and real-world use cases can grow.

Studying the abilities of quantum and classical computers lives in the realm of computational complexity, which draws borders around different types of problems and sorts them into classes based on how efficiently a classical or quantum computer can solve or check them. We often discuss computational complexity in terms of how long it takes a computer to solve a problem: an “efficient” solution is one whose running time grows polynomially with the size of the problem; that is, if the problem size is n, the time to solve the problem scales like n^c, where c is some constant. “Hard” problems are ones for which the best known solutions take an exponential amount of time, for example 2^n.
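
To make the gap concrete, here is a small, purely illustrative Python snippet (not tied to any of the papers above) comparing a polynomial cost n^3 with an exponential cost 2^n:

```python
# Illustrative only: compare polynomial vs. exponential growth of running time
# as a function of the problem size n.
for n in (10, 50, 100):
    polynomial = n ** 3   # an "efficient" scaling, n^c with c = 3
    exponential = 2 ** n  # a "hard" scaling, 2^n
    print(f"n = {n:3d}   n^3 = {polynomial:.1e}   2^n = {exponential:.1e}")
```

Already at n = 100, the exponential cost exceeds the polynomial one by more than 20 orders of magnitude, which is why “efficient” is usually equated with polynomial scaling.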

Another paper3 accepted to QIP, by IBM’s Sergey Bravyi and Pawel Wocjan, and Anirban Chowdhury and David Gosset from the University of Waterloo, studies the computational complexity of simulating quantum systems in thermal equilibrium, that is, when the system can exchange energy with a heat bath. Depending on the bath temperature, the state of such a system may be spread over many energy levels, and the problem is to calculate statistical properties of the system, such as its average energy and entropy.

The University of Waterloo is an academic partner in the IBM Quantum Network.

This problem is known to be particularly hard for both classical and quantum computers. However, so far nobody has been able to quantify its hardness in terms of the known complexity classes. Gosset et al.’s paper makes a first step in this direction by showing that simulating a system of qubits with two-qubit interactions in thermal equilibrium is as hard as the “Quantum Approximate Counting” problem: given a quantum computation that either accepts or rejects any input, approximately count the number of accepted inputs.

In other words, a subroutine solving the quantum simulation problem can be efficiently converted into a subroutine solving the Quantum Approximate Counting problem and vice versa. Unfortunately, the Quantum Approximate Counting problem is an especially hard one with no known efficient quantum solution. In addition, the authors find that certain quantum systems in which each qubit can talk to any other qubit are easy to simulate: all properties of the thermal equilibrium state of such systems can be efficiently calculated on a classical computer.

Many hard problems that can be tackled using quantum computers, such as factoring, still fall within the NP complexity class: problems for which we don’t yet know how to find a solution classically in polynomial time, but for which we can efficiently check whether a candidate solution is correct. This class includes practically important optimization problems with applications in finance, logistics, and more. (Note: it’s unlikely that a quantum computer will ever be able to efficiently generate exact solutions to all NP problems.)

Still, researchers are setting their sights even further, toward problems that are believed to be outside NP. Those problems would be beyond a state-of-the-art classical computer’s ability to either solve or verify in polynomial time. One example of such a problem is Forrelation. This task is among the simplest for a quantum computer, yet it is extremely difficult for a classical computer, and it may exhibit the biggest possible separation between classical and quantum complexity.

Forrelation, as defined by Scott Aaronson and Andris Ambainis in Forrelation: A Problem that Optimally Separates Quantum from Classical Computing, is the problem “where one needs to decide whether one Boolean function is highly correlated with the Fourier transform of a second function.”
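
Paraphrasing Aaronson and Ambainis’s definition: given two Boolean functions f, g : {0,1}^n → {+1, −1}, the quantity of interest is

$$\Phi_{f,g} \;=\; \frac{1}{2^{3n/2}} \sum_{x,\,y \,\in\, \{0,1\}^n} f(x)\,(-1)^{x \cdot y}\, g(y)$$

and the task is to decide whether |Φ_{f,g}| is close to zero or bounded away from it (their paper uses thresholds of roughly 1/100 and 3/5). A quantum computer can estimate Φ_{f,g} with only a couple of queries to f and g, which is what makes the problem so easy on the quantum side.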

In another paper4 accepted to QIP, Bravyi and Gosset, together with Daniel Grier and Luke Schaeffer from the University of Waterloo, set out to understand just how hard Forrelation is for a classical computer, and what potential Forrelation has for useful quantum computation. The paper gives an improved exponential-time classical algorithm for the Forrelation problem and identifies new special cases of the problem that admit a polynomial-time classical algorithm.

The paper also uncovers a surprising connection between the Forrelation problem and the well-known Quantum Approximate Optimization Algorithm (QAOA), which finds approximate solutions to optimization problems whose exact solutions would be as hard as the hardest NP problems. The authors exploit this connection to benchmark the performance of QAOA on medium-size optimization problems with about 100 variables. This provides a new algorithmic tool for benchmarking near-term quantum computers, and for figuring out where the abilities of classical computers end and where the abilities of quantum computers begin.

QAOA is a well-known algorithm for finding approximate solutions to combinatorial-optimization problems.
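
For reference, a depth-p QAOA prepares a parameterized state of the standard form (textbook material, not specific to this paper):

$$|\gamma,\beta\rangle \;=\; e^{-i\beta_p B}\, e^{-i\gamma_p C} \cdots e^{-i\beta_1 B}\, e^{-i\gamma_1 C}\, |+\rangle^{\otimes n}$$

Here C is the diagonal cost Hamiltonian encoding the optimization problem, B = \sum_i X_i is the mixing Hamiltonian, and the angles γ, β are tuned to optimize the expected cost ⟨γ,β|C|γ,β⟩. Measuring the state in the computational basis then yields candidate solutions.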

Another area where IBM Quantum physicists are laying theoretical groundwork is quantum communication: sending quantum information between two points, such as when linking quantum processors together to perform larger computations, or when trying to send quantum-encrypted messages. In the abstract, we think of these communication pathways as “channels,” places where information goes in, travels and interacts with the environment along the way, and then comes out, like a telecommunications cable or a water slide. It’s easy to calculate how much information we can send through a classical channel; we might call this the bandwidth of the cable, or the maximum occupancy of the slide.

The capacities of quantum channels are much harder to calculate. At the same time, capacity is an important property to know: if you don’t know the capacity of a channel and try to send private information over it, that information might leak out into the environment, or worse, to someone attempting to eavesdrop.
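
One reason the quantum case is so hard: by the Lloyd-Shor-Devetak theorem, the quantum capacity of a channel N is given by a regularized formula,

$$Q(\mathcal{N}) \;=\; \lim_{n \to \infty} \frac{1}{n} \max_{\rho_n} I_c\!\left(\rho_n, \mathcal{N}^{\otimes n}\right)$$

where I_c is the coherent information. The optimization runs over inputs to unboundedly many uses of the channel at once, so there is no general recipe for evaluating the limit.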

However, there are still important statements we can make about the quantum capacity of different channels before we know how to actually calculate that capacity, as explored5 by Felix Leditzky from the University of Illinois at Urbana-Champaign, Debbie Leung from the University of Waterloo, Graeme Smith from JILA, Vikesh Siddhu, and IBM’s John Smolin. The team devised the idea of “platypus channels,” so named because, like a platypus, they combine parts that seem to follow different rules, in this case different channels, into a single object.

Their work demonstrates that these platypus channels display superadditivity, meaning that when they are combined with other simple channels, you can achieve capacities higher than the simple sum of the individual channels’ capacities, kind of like how a seed mixed with water doesn’t just create a wet seed, but causes a plant to grow. Perhaps one day, the superadditivity of these platypus channels will allow us to send more information by thoughtfully combining existing pieces of hardware, rather than having to develop new hardware. But most importantly, the paper helps hone our core intuition about quantum and classical channels: that quantum channels really do exhibit behaviors that classical channels do not.
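
Schematically, superadditivity of the quantum capacity Q means that for two channels N_1 and N_2 used together,

$$Q(\mathcal{N}_1 \otimes \mathcal{N}_2) \;>\; Q(\mathcal{N}_1) + Q(\mathcal{N}_2)$$

so the whole can carry more quantum information than the sum of its parts. The capacity of classical channels is additive, so nothing like this ever happens classically.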

Theory continues to serve as a central component of IBM Quantum’s mission, and our theorists are always working to deepen our understanding of quantum so we can produce the most robust quantum hardware. The inclusion of so many of our papers in the QIP conference not only demonstrates this commitment to theory, but also shows that the commitment is paying off, generating truly groundbreaking thought in the field of quantum information.


References

  1. Christophe Piveteau, David Sutter, Sergey Bravyi, Jay M. Gambetta, Kristan Temme. Error mitigation for universal gates on encoded qubits. Phys. Rev. Lett. 127, 200505 (12 November 2021).
  2. Guanyu Zhu, Tomas Jochym-O’Connor, Arpit Dua. Quantum codes, topological order, and quantum computation on fractal geometries. arXiv preprint (submitted 30 Jul 2021; revised 27 Aug 2021).
  3. Sergey Bravyi, Pawel Wocjan, Anirban Chowdhury, David Gosset. On the complexity of quantum partition functions. arXiv preprint (submitted 29 Oct 2021).
  4. Sergey Bravyi, David Gosset, Daniel Grier, Luke Schaeffer. Classical algorithms for Forrelation. arXiv preprint (submitted 13 Feb 2021; revised 31 Oct 2021).
  5. Felix Leditzky, Debbie Leung, Graeme Smith, Vikesh Siddhu, John Smolin. The platypus of the quantum channel zoo. arXiv preprint (submitted 16 Feb 2022).
