
How IBM Quantum is advancing quantum error correction with hardware experiments

Quantum computers will reach their full potential only when we can correct the errors innate to such sensitive hardware. The IBM Quantum team is making great strides toward that future with hardware-aware error correction experiments, today.


7 Apr 2022

Maika Takita

Theodore J. Yoder

Watch “Making quantum error correction more practical” with Dr. Maika Takita, where she details her work on hardware-aware code design and the experimental results of these efforts to cut down on the errors in quantum computation.

In just the last year, our team has presented several notable papers demonstrating error correction implemented on real quantum hardware. These experiments aren’t just bringing error correction closer to reality; each one teaches us more about our hardware and what causes errors than theory alone could. While we still have our work cut out for us, performing studies like these helps us chart the course of quantum processor development and ensures that we’ll be ready to build fault-tolerant processors once our hardware has matured. After all, error correction and hardware development aren’t silos: here at IBM Quantum, we actively co-design our error-correcting codes alongside our ever-advancing hardware roadmap.

So, why do we need quantum error correction? Every step of quantum computation, and even the outside environment, has the potential to introduce errors and cause the processor to lose quantum information. Merely building redundancies into the computation is out of the question, too: the no-cloning theorem of quantum mechanics tells us that it’s impossible to duplicate an unknown quantum state.

At IBM, we develop bottom-up approaches to the problem of noisy qubits and incorporate error correction techniques to realize this technology’s true potential.

Quantum errors are more complex than a classical computer’s errors, of course. Classical errors consist only of bit flips, where a bit might change from a 1 to a 0. Quantum bits can also enter superposition, taking on combinations of 0 and 1 with an associated angle called a phase, which can suffer errors of its own. If you imagine a classical bit as a light switch, then a quantum bit, or qubit, is more like a dimmer whose switch can move to any position on a sphere: bit flip errors move the switch closer to the top or bottom than desired, while phase errors rotate the dimmer around the sphere.
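As a small illustration (not from the article itself), the two error types can be written as the Pauli X and Z matrices acting on a single-qubit state vector:

```python
import numpy as np

# A qubit state is a unit vector a|0> + b|1>.
ket0 = np.array([1, 0], dtype=complex)              # the |0> state
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # equal superposition |+>

X = np.array([[0, 1], [1, 0]])   # bit-flip error
Z = np.array([[1, 0], [0, -1]])  # phase-flip error

# A bit flip swaps |0> and |1>:
print(X @ ket0)  # the |1> state, [0, 1]

# A phase flip leaves |0> alone but rotates superpositions:
print(np.allclose(Z @ ket0, ket0))  # True: |0> is unaffected
print(np.allclose(Z @ plus, plus))  # False: |+> becomes |->
```

The phase flip is invisible if you only ever measure in the 0/1 basis, which is part of why quantum error correction needs checks for both error types.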


Like classical error correction, quantum error correction begins by encoding information, or spreading quantum information over a system of multiple redundant qubits. Noise induces slight changes to the quantum state, and then we decode this information, or return a value that takes into account our knowledge of how an error would impact the encoded information. This decoding uses parity measurements, also called stabilizers, where we entangle helper quantum bits with the computational qubits, measure them to look for indirect signs of error, and then fix the error.
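To make the encode, measure, decode loop concrete, here is a minimal classical sketch built on the three-bit repetition idea. The real experiments entangle helper qubits and measure them, but the parity logic is analogous:

```python
# Three-bit repetition sketch: the logical bit is copied across data
# bits d0, d1, d2. Two parity checks (the classical analogue of
# stabilizers) compare neighbouring pairs without reading the data itself.
def syndrome(data):
    return (data[0] ^ data[1], data[1] ^ data[2])

# Decode: each syndrome pattern points at the single most likely flipped bit.
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(data):
    flipped = CORRECTION[syndrome(data)]
    if flipped is not None:
        data[flipped] ^= 1
    return data

print(correct([0, 1, 0]))  # middle bit flipped -> restored to [0, 0, 0]
```

Note the key property: the parity checks reveal *where* an error happened without revealing the encoded value, which is what lets the quantum version avoid collapsing the state.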

Provided that our qubits have an innate error rate below some threshold, we can encode them by building a structure of data and helper qubits that is resilient to errors. This code can take on different shapes, most notably the “surface code,” which looks like a checkerboard.
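The threshold behavior can be sketched numerically. Assuming the commonly quoted scaling p_L ≈ A·(p/p_th)^⌊(d+1)/2⌋ for a distance-d code, with illustrative constants rather than IBM's measured numbers:

```python
# Below-threshold scaling sketch. All numbers here are illustrative.
p_th = 0.01  # assumed threshold error rate

def logical_error_rate(p, d, A=0.1):
    """Rough logical error rate for physical error rate p, distance d."""
    return A * (p / p_th) ** ((d + 1) // 2)

# Below threshold, growing the distance suppresses logical errors:
for d in (3, 5, 7):
    print(d, logical_error_rate(0.001, d))

# Above threshold, a bigger code only makes things worse:
for d in (3, 5, 7):
    print(d, logical_error_rate(0.05, d))
```

This is why driving physical error rates below threshold matters so much: only then does adding qubits to the code buy you exponential suppression.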

From square to heavy-hex lattice topology

While IBM Quantum once considered square lattice topologies and the accompanying surface code, we found that we could reduce frequency collisions by moving to a “heavy-hex” code with lower connectivity, with qubits arranged on the edges and vertices of a tessellation of hexagons. Our error correction research has changed tack in kind, with advances in hardware development and error correction playing off of each other. As we continue to evolve our hardware, error correction will continue to evolve alongside it.

As of August 8, 2021, all active IBM Quantum devices use the heavy-hex lattice topology, including the IBM Quantum System One Falcon processors installed in Germany and Japan.

All of this is easier said than done, of course. So, our team runs experiments in these early days of error correction, encoding and decoding quantum information using IBM Quantum processors.

In one demonstration,1 published this month in Physical Review Letters, we demonstrated error detection with a [[4,1,2]] code. This notation means that we used four physical qubits to represent one qubit’s worth of information in a code of distance two, where distance measures the code’s robustness: the minimum number of physical errors needed to corrupt the logical qubit’s value undetected. Each unit of the [[4,1,2]] code is an H shape, with data qubits each linked by a “flag” qubit to detect and fix bit flips, and a “syndrome” qubit in the center to detect and fix phase flips.
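One standard presentation of a distance-2 four-qubit code uses the stabilizers XXXX and ZZZZ (an assumption for illustration; the paper's exact layout of flag and syndrome qubits differs). In binary symplectic form, detection reduces to a parity computation: an error is flagged when it anticommutes with a stabilizer.

```python
import numpy as np

# Stabilizers XXXX and ZZZZ in binary symplectic form (x-part | z-part).
stabilizers = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0],  # XXXX
    [0, 0, 0, 0, 1, 1, 1, 1],  # ZZZZ
])

def symplectic_product(p, q, n=4):
    """1 if the Paulis anticommute, 0 if they commute."""
    return (p[:n] @ q[n:] + p[n:] @ q[:n]) % 2

def detected(error):
    return any(symplectic_product(s, error) for s in stabilizers)

x_on_q2 = np.array([0, 0, 1, 0, 0, 0, 0, 0])  # X error on qubit 2
z_on_q0 = np.array([0, 0, 0, 0, 1, 0, 0, 0])  # Z error on qubit 0
xx_pair = np.array([1, 1, 0, 0, 0, 0, 0, 0])  # weight-2 X error

print(detected(x_on_q2))  # True: ZZZZ flags it
print(detected(z_on_q0))  # True: XXXX flags it
print(detected(xx_pair))  # False: weight 2 can slip past a distance-2 code
```

The last case is exactly what "distance two" means: every single-qubit error is caught, but some two-qubit errors commute with all the checks.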

We first tested how well our implementation of the code could detect errors by performing postselection, throwing out any experiment runs where the code told us there was an error. The rate at which the remaining runs nevertheless returned an incorrect logical value, called the logical error rate, was one of the best yet demonstrated for a distance-2 code. But tossing all of this data doesn’t allow us to perform any decoding, so we also devised a partial-postselection method that allowed us to experiment with a decoding scheme. We still measured logical error rates better than we would have without any postselection, while throwing out around five times less data than full postselection.
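A toy Monte Carlo sketch of postselection, with made-up rates rather than the paper's measured values, shows why discarding flagged shots lowers the observed logical error rate:

```python
import random

# Each shot either succeeds, fails *detectably* (flagged, so we can
# discard it), or fails silently (a logical error the detection misses).
random.seed(7)
P_DETECTED, P_SILENT = 0.10, 0.005  # illustrative rates

kept, silent_failures, total = 0, 0, 200_000
for _ in range(total):
    r = random.random()
    if r < P_DETECTED:
        continue                 # flagged shot: thrown out by postselection
    kept += 1
    if r < P_DETECTED + P_SILENT:
        silent_failures += 1     # undetected logical error

raw_rate = P_DETECTED + P_SILENT        # error rate with no postselection
postselected_rate = silent_failures / kept
print(postselected_rate < raw_rate)     # True: discarding flagged shots helps
```

The trade-off described above is visible here too: full postselection improves quality at the price of discarded data, which is what motivates the partial-postselection compromise.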

Implementing error correcting codes

Advances like mid-circuit measurement, including conditional reset of our qubits, are critical for developing new capabilities such as dynamic circuits, and they are also partly responsible for our being able to begin experimenting with error detection. Further advances to IBM Quantum hardware have allowed us to test2 even larger error correcting codes. While distance-2 codes are only capable of detecting errors, we recently demonstrated the ability to implement and test a distance-3 code, one that can also correct errors, on our 27-qubit Falcon processor. This experiment also allowed us to evaluate a more accurate “maximum likelihood” decoder.
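To illustrate the idea behind maximum-likelihood decoding (on a toy 3-bit repetition code rather than the distance-3 code from the experiment): instead of mapping each syndrome to one fixed correction, the decoder weighs every error pattern consistent with the syndrome by its probability and picks the likeliest.

```python
from itertools import product

P = 0.1  # independent bit-flip probability per bit (illustrative)

def prob(err):
    """Probability of a specific error pattern under independent flips."""
    p = 1.0
    for e in err:
        p *= P if e else 1 - P
    return p

def ml_decode(synd):
    # Enumerate all error patterns consistent with the observed syndrome,
    # then return the most probable one as the correction to apply.
    candidates = [e for e in product((0, 1), repeat=3)
                  if (e[0] ^ e[1], e[1] ^ e[2]) == synd]
    return max(candidates, key=prob)

print(ml_decode((1, 0)))  # (1, 0, 0): a single flip beats a double flip
```

For this tiny code, maximum likelihood coincides with majority voting; in larger codes with correlated noise, weighing whole error classes is where it gains accuracy over simpler matching decoders.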

Read more about how to measure and reset a qubit in the middle of a circuit execution, here.

The distance-3 case is closer to a proof of concept than to full-blown error correction. However, it demonstrates the importance of running error correcting codes on real hardware: we determined that an important contribution to the error rate comes from leakage, where qubits take on states even higher than 1. While removing these cases improves our logical error rate, studying them will also give us a path forward to further improve our error correcting schemes.
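A toy model of leakage, with illustrative rates, shows why removing leaked shots improves the observed logical error rate:

```python
import random

# Each shot may leak (a qubit ends up above |1>), and leaked shots fail
# far more often than clean ones. All rates here are made up.
random.seed(11)
P_LEAK, P_FAIL, P_FAIL_LEAKED = 0.03, 0.01, 0.5

def run_shot():
    leaked = random.random() < P_LEAK
    failed = random.random() < (P_FAIL_LEAKED if leaked else P_FAIL)
    return leaked, failed

shots = [run_shot() for _ in range(100_000)]
all_rate = sum(f for _, f in shots) / len(shots)

kept = [(l, f) for l, f in shots if not l]   # discard leaked shots
kept_rate = sum(f for _, f in kept) / len(kept)
print(kept_rate < all_rate)  # True: removing leaked shots lowers the rate
```

Of course, discarding shots is only a diagnostic; the longer-term fix is hardware and protocol changes that suppress or reset leakage in place.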

These experiments demonstrate the importance of implementing our error correcting codes on real quantum hardware. Not only can they help further drive theory, but they can tell us what kinds of coding and decoding schemes work best for the hardware that we’re constructing, while showing us what kinds of errors we should expect in the real world. These experiments are even bringing about new considerations that theory may not have anticipated.

Now that we’re able to implement error correction, the lion’s share of the work remains in improving it so that once our qubits have low enough error rates, error correction schemes are ready to take us the rest of the way to fault tolerance. As the IBM Quantum team rolls out larger devices, we’ll need to implement codes with increasingly large distances — and increasingly robust error correction.

And, in the meantime, the theory team is continuing to think about whether we can begin to pursue quantum applications before our hardware is ready for error correction. This includes strategies to mitigate errors, or to combine error mitigation with partial error correction as detailed3 in our team’s 2021 Physical Review Letters paper.
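As one concrete example of an error mitigation strategy, here is a sketch of zero-noise extrapolation, a generic technique named here for illustration and not necessarily the method of reference 3. The idea: measure an observable at artificially amplified noise levels, then extrapolate back to the zero-noise limit.

```python
import numpy as np

noise_factors = np.array([1.0, 2.0, 3.0])  # noise amplification levels
ideal = 1.0  # noiseless expectation value of some observable

# Toy noise model (an assumption): the signal decays exponentially
# with noise strength.
measured = ideal * np.exp(-0.2 * noise_factors)

# Fit log(signal) linearly in the noise factor, extrapolate to zero noise.
slope, intercept = np.polyfit(noise_factors, np.log(measured), 1)
mitigated = np.exp(intercept)

print(abs(mitigated - ideal) < abs(measured[0] - ideal))  # True
```

Mitigation like this trades extra circuit runs for accuracy without needing more qubits, which is why it bridges the gap until full error correction is practical.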

While we are making progress in advancing error correction techniques, current hardware can already fully exploit error mitigation, which will bring us close to quantum advantage. The future will be a smooth transition from error mitigation to error correction. Quantum error correction is a work in progress, and the progress that our team has made to date is impressive. We’re optimistic that this work will continue to push us along our hardware roadmap toward full-scale quantum computation.




References

  1. Chen, E., Yoder, T., Kim, Y., et al. Calibrated Decoders for Experimental Quantum Error Correction. Phys. Rev. Lett. 128, 110504 (2022).

  2. Sundaresan, N., Yoder, T., Kim, Y., et al. Matching and maximum likelihood decoding of a multi-round subsystem quantum error correction experiment. arXiv (2022).

  3. Piveteau, C., Sutter, D., Bravyi, S., et al. Error Mitigation for Universal Gates on Encoded Qubits. Phys. Rev. Lett. 127, 200505 (2021).
