The IBM Quantum heavy hex lattice
As of August 8, 2021, the topology of all active IBM Quantum devices will be based on the heavy-hex lattice. The heavy-hex lattice represents the fourth iteration of the topology for IBM Quantum systems and is the basis for the Falcon and Hummingbird quantum processor architectures. Each unit cell of the lattice consists of a hexagonal arrangement of qubits, with an additional qubit on each edge.
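The unit cell described above can be sketched in a few lines of plain Python. This is a minimal illustration, not IBM code; the qubit labels (`c0`–`c5` for hexagon corners, `e0`–`e5` for edge qubits) are invented for this example, and the degree-2 result holds only for one isolated cell (corner qubits reach degree 3 where cells tile).

```python
from collections import defaultdict

def heavy_hex_unit_cell():
    """Return an adjacency map for a single, isolated heavy-hex cell."""
    corners = [f"c{i}" for i in range(6)]
    adj = defaultdict(set)
    for i in range(6):
        edge = f"e{i}"  # the extra qubit sitting on edge (c_i, c_{i+1})
        for corner in (corners[i], corners[(i + 1) % 6]):
            adj[corner].add(edge)
            adj[edge].add(corner)
    return dict(adj)

cell = heavy_hex_unit_cell()
assert len(cell) == 12                                # 6 corners + 6 edge qubits
assert all(len(nbrs) == 2 for nbrs in cell.values())  # degree 2 in isolation
```

In the full tiled lattice, the shared corner qubits rise to degree 3, which is the maximum degree anywhere in the heavy-hex topology.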
The heavy-hex topology is a product of co-design between experiment, theory, and applications. It is scalable and offers reduced error rates while affording the opportunity to explore error-correcting codes. Based on lessons learned from earlier systems, the heavy-hex topology represents a slight reduction in qubit connectivity from previous-generation systems but, crucially, minimizes both qubit frequency collisions and spectator qubit errors that are detrimental to real-world quantum application performance.
In this tech report, we discuss the considerations needed when choosing the architecture for a quantum computer. Based on proven fidelity improvements and manufacturing scalability, we believe that the heavy-hex lattice is superior to a square lattice in offering a clear path to quantum advantage, from enabling more accurate near-term experimentation to reaching the critical goal of demonstrating fault-tolerant error correction. We demonstrate that the heavy-hex lattice is equivalent to the square lattice up to a constant overhead, and, like other constant overheads such as the choice of gate set, this cost is insignificant compared to the cost of mapping the problem itself.
IBM Quantum systems make use of fixed-frequency qubits, where the characteristic properties of the qubits are set at the time of fabrication. The two-qubit entangling gate in such systems is the cross-resonance (CR) gate, where the control qubit is driven at the target qubit’s resonance frequency. See Fig. 1 for the layout of control and target qubits in the heavy-hex lattice. These frequencies must be off-resonant with neighboring qubit transition frequencies to prevent undesired interactions called “frequency collisions.”
The larger the qubit connectivity, the more frequency conditions must be satisfied, and degeneracies amongst transition frequencies become more likely. In addition, fabrication imperfections may require disabling an edge in the system connectivity (e.g., see Penguin v1 and v2 in Fig. 2), all of which can affect device performance and add hardware overhead.
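To make the scaling with connectivity concrete, here is a toy, back-of-the-envelope count (not the full collision taxonomy analyzed in Ref.^{3}): a qubit of degree d participates in roughly one pairwise frequency condition per neighbor, plus one per pair of neighbors that must also stay mutually off-resonant.

```python
def constraints_per_qubit(d):
    """Toy count of pairwise frequency conditions for a degree-d qubit:
    one per neighbor, plus one per pair of neighbors that must remain
    mutually off-resonant. Real collision counting is more involved."""
    return d + d * (d - 1) // 2

print(constraints_per_qubit(4))  # square-lattice interior qubit -> 10
print(constraints_per_qubit(3))  # heavy-hex degree-3 qubit      -> 6
print(constraints_per_qubit(2))  # heavy-hex degree-2 qubit      -> 3
```

Even this crude count shows why trimming the maximum degree from 4 to 3 (and most qubits to degree 2) sharply reduces the number of conditions that must be satisfied simultaneously across a device.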
A similar set of frequency collisions appears in flux-tunable qubits as avoided crossings when implementing flux control.^{1} Moreover, tunable qubits come at the cost of introducing flux noise, which reduces coherence, and the flux control adds scaling challenges to larger architectures through increased operational complexity in qubit tune-up and decreased gate fidelity caused by pulse distortions along the flux line.^{2}
As shown in Fig. 3, the decrease in qubit connectivity offered by the heavy-hex lattice, as well as the selected pattern of control and target qubit frequencies, gives an order-of-magnitude increase^{3} in the yield of devices with zero frequency collisions as compared to other choices of system topology.
The sparsity of the heavy-hex topology with fixed-frequency qubits also improves overall gate fidelity^{4} by limiting spectator qubit errors: errors generated by qubits that are not directly participating in a given two-qubit gate operation. These errors can degrade system performance yet do not present themselves when the gate is performed in isolation; one- and two-qubit benchmarking techniques are not sensitive to them.
However, spectator errors matter greatly when running full circuits, and their rate is directly related to the system connectivity. The heavy-hex connectivity reduces the occurrence of these spectator errors by placing control qubits only on those edges connected to target qubits (Figure 1).
Figure 4 shows the average CNOT error rates for four generations of Penguin quantum processors along with those of the Falcon and Hummingbird families that utilize the heavy-hex topology. The reduction in frequency collisions and spectator errors allows devices to achieve better than 1 percent average CNOT error rates across the device, with isolated two-qubit gates approaching 0.5 percent; this represents a factor-of-three decrease compared to the error rates on the best Penguin device with a square layout. Additional techniques for mitigating spectator errors are given in Ref.^{5}
Quantum Volume (QV) is a holistic, hardware-agnostic quantum system benchmark that encapsulates system properties such as the number of qubits and connectivity, as well as gate, spectator, and measurement errors, into a single numerical value by finding the largest square circuit that a quantum device can reliably execute.^{6}
Higher Quantum Volumes directly equate to higher processor performance. Gate errors measured by single- or two-qubit benchmarking do not reveal all errors in a circuit, for example crosstalk and spectator errors, and estimating circuit errors from gate errors alone is nontrivial. In contrast, QV readily incorporates all possible sources of noise in a system and measures how well the system implements average quantum circuits. This allows one to find the best system on which to run a given application.
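The definition in Ref.^{6} can be reduced to a short definitional sketch: QV = 2^n for the largest n where square (width = depth = n) model circuits pass the heavy-output test with probability above 2/3. The probabilities below are hypothetical placeholders, not measurements from any real device, and the full protocol additionally requires statistical confidence on the 2/3 threshold.

```python
def quantum_volume(heavy_output_prob):
    """QV = 2**n for the largest n such that every model-circuit size
    up to n keeps its heavy-output probability above 2/3. (The real
    protocol also demands 2-sigma confidence on each threshold test.)"""
    n_max = 0
    for n in sorted(heavy_output_prob):
        if heavy_output_prob[n] > 2 / 3:
            n_max = n
        else:
            break
    return 2 ** n_max

# Hypothetical heavy-output probabilities, NOT data from a real device.
measured = {2: 0.84, 3: 0.79, 4: 0.74, 5: 0.70, 6: 0.61}
print(quantum_volume(measured))  # -> 32
```

Because every noise source in the system suppresses the heavy-output probability, a single QV number captures crosstalk and spectator effects that isolated gate benchmarks miss.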
Figure 5 shows the evolution of Quantum Volume over IBM Quantum systems, demonstrating that only the heavy-hex-based Falcon and Hummingbird systems achieve QV 32 or higher. Parallel improvements in gate design, qubit readout, and control software, such as those in Ref.,^{5} also play an important role in increasing QV values faster than the anticipated yearly doubling.
Development of quantum error-correcting codes is one of the primary areas of research as gate errors begin to approach fault-tolerant thresholds. The surface code, implemented on a square-grid topology, is one such example. However, as already discussed and experimentally verified,^{7} frequency collisions are common in fixed-frequency qubit systems with square planar layouts. As such, researchers at IBM Quantum developed a new family of hybrid surface and Bacon-Shor subsystem codes that are naturally implemented on the heavy-hex lattice.^{8}
Similar to the surface code, the heavy-hex code requires a four-body syndrome measurement. However, the heavy-hex code reduces the required connectivity by implementing each degree-four node with two degree-three nodes, as presented in Fig. 6.
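The node-splitting idea can be illustrated abstractly. This is a minimal sketch with invented labels (`v1`, `v2`, `n0`–`n3`), not the actual syndrome-extraction circuit of Ref.^{8}: one degree-four vertex is replaced by two linked vertices, each inheriting two of the original four neighbors, so the maximum degree drops from 4 to 3.

```python
def split_degree_four(neighbors):
    """Replace one degree-4 vertex by two linked degree-3 vertices,
    each inheriting two of the original four neighbors."""
    a, b, c, d = neighbors
    v1 = {a, b, "v2"}  # first replacement vertex, linked to v2
    v2 = {c, d, "v1"}  # second replacement vertex, linked to v1
    return v1, v2

v1, v2 = split_degree_four(["n0", "n1", "n2", "n3"])
assert len(v1) == 3 and len(v2) == 3  # maximum degree drops from 4 to 3
```

The four-body parity information is then gathered across the two linked vertices rather than at a single high-degree node, which is exactly the kind of layout the heavy-hex lattice natively supports.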
The connectivity of other lattices, such as the square lattice, can be simulated on the heavy-hex lattice with constant overhead by introducing swap operations within a suitably chosen unit cell. The vertices of the desired virtual lattice are associated with subsets of vertices in the heavy-hex lattice such that nearest-neighbor gates in the virtual lattice can be simulated with additional two-qubit gates.
Taking the square lattice as an example, there are a variety of ways to associate a unit cell of the heavy-hex lattice with the square lattice. If we draw the hexagons as 3x5 rectangles, one natural choice places the qubits of the square lattice on the horizontal edges of the heavy-hex lattice; see Figure 7.
Consider the goal of applying an arbitrary two-qubit gate, i.e., an element of U(4), between each pair of neighboring qubits in the virtual lattice. This can be accomplished with constant overhead in depth 14, of which eight steps involve only swap gates; each qubit individually participates in six swap gates. Alternatively, these swaps might be replaced by teleported gates, at a potentially lower constant cost, if the desired interactions correspond to Clifford gates.
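The basic accounting behind such swap schedules can be sketched as follows. This counts only one isolated interaction on a path (the depth-14 schedule in the text interleaves many such routes in parallel, so its per-gate cost is lower than this naive count suggests): to interact two qubits that are k links apart, swap one qubit inward across k - 1 links, apply the gate, then swap it back to restore the layout.

```python
def swap_route_cost(distance):
    """Two-qubit-gate cost of one isolated interaction between qubits
    `distance` links apart on a path: swap one qubit inward, apply the
    gate, then swap it back to restore the original layout."""
    swaps = 2 * (distance - 1)
    return swaps, swaps + 1  # (swap gates, total two-qubit gates)

print(swap_route_cost(1))  # adjacent qubits: (0, 1) - no swaps needed
print(swap_route_cost(3))  # three links apart: (4, 5)
```

Because the distances inside a heavy-hex unit cell are bounded constants, the total routing cost per round of virtual-lattice gates is a constant as well, which is the equivalence claim made above.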
Other mappings exist as well and expose new possibilities for trade-offs and optimizations. For example, an interesting alternative mapping encodes the qubits of the square lattice into three-qubit repetition codes on the left and right edges of each 3x5 rectangle (Figure 8, left). This creates an effective heavy-square lattice where encoded qubits are separated by a single auxiliary qubit (Figure 8, right). In this encoding we can apply diagonal interactions in parallel along the vertical or horizontal direction of the heavy-square lattice. Since swaps occur in parallel between these two rounds of interactions, the total depth is only two rounds of swaps and two rounds of diagonal gates.
There are relatively simple circuits for applying single-qubit gates to the repetition-code qubits whose cost is roughly equivalent to a swap gate. Since none of these operations is necessarily fault-tolerant, the error rate can increase by as much as a factor of three, but one can post-select on phase-flip errors while taking advantage of the fact that the code itself corrects a single bit-flip error. As mentioned, the cost of the encodings described above is a constant and is thus on equal footing with other constant overheads, such as the choice of gate set used. These should be compared with the cost of mapping the problem itself onto the quantum computer, which may incur a polynomial overhead.
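The bit-flip protection mentioned above comes from the classical structure of the repetition code, which a short sketch makes explicit: majority vote over the three copies recovers the logical bit whenever at most one copy is flipped (phase flips, by contrast, are only detected and post-selected, not corrected).

```python
def majority(bits):
    """Decode a 3-bit repetition codeword by majority vote."""
    return 1 if sum(bits) >= 2 else 0

logical = 0
codeword = [logical] * 3
codeword[1] ^= 1                      # one bit flip: still decodable
assert majority(codeword) == logical
codeword[2] ^= 1                      # a second flip defeats the code
assert majority(codeword) != logical
```

This is why the factor-of-three error increase from non-fault-tolerant operations is partially offset: any single bit-flip error per encoded qubit is absorbed by the code itself.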
We have demonstrated why the IBM Quantum heavy-hex lattice is a promising platform for quantum computation, both today and tomorrow. This topology has proven its worth in terms of both scalability and performance by improving device error rates. These improvements are captured in system benchmarks such as Quantum Volume, which continues to increase at an exponential rate. The connectivity of the heavy-hex lattice can be mapped onto other canonical lattices, such as the square lattice, with a constant cost overhead that is negligible compared to problem-encoding costs.
Additionally, the improvements in gate errors offered by the heavy-hex lattice significantly offset the overhead from the additional mapping gates. We are confident that this platform will allow continuing improvements in both device fidelity and size, and will beat square lattices in the race to quantum advantage.
References
1. R. Versluis, S. Poletto, N. Khammassi, B. Tarasinski, N. Haider, D. J. Michalak, A. Bruno and K. D. L. Bertels, "Scalable Quantum Circuit and Control for a Superconducting Surface Code," Phys. Rev. Applied, vol. 8, p. 034021, 2017.
2. J. M. Gambetta, J. M. Chow and M. Steffen, "Building logical qubits in a superconducting quantum computing system," npj Quantum Information, vol. 3, p. 2, 2017.
3. J. B. Hertzberg, E. J. Zhang, S. Rosenblatt, et al., "Laser-annealing Josephson junctions for yielding scaled-up superconducting quantum processors," arXiv:2009.00781.
4. M. Takita, A. W. Cross, A. D. Córcoles, J. M. Chow and J. M. Gambetta, "Experimental Demonstration of Fault-Tolerant State Preparation with Superconducting Qubits," Phys. Rev. Lett., vol. 119, 2017.
5. N. Sundaresan, I. Lauer, E. Pritchett, E. Magesan, P. Jurcevic and J. M. Gambetta, "Reducing unitary and spectator errors in cross resonance with optimized rotary echoes," arXiv:2007.02925, 2020.
6. A. W. Cross, L. S. Bishop, S. Sheldon, P. D. Nation and J. M. Gambetta, "Validating quantum computers using randomized model circuits," Phys. Rev. A, vol. 100, p. 032328, 2019.
7. A. D. Córcoles, E. Magesan, S. J. Srinivasan, A. W. Cross, M. Steffen, J. M. Gambetta and J. M. Chow, "Demonstration of a quantum error detection code using a square lattice of four superconducting qubits," Nat. Commun., vol. 6, p. 6979, 2015.
8. C. Chamberland, G. Zhu, T. J. Yoder, J. B. Hertzberg and A. W. Cross, "Topological and Subsystem Codes on Low-Degree Graphs with Flag Qubits," Phys. Rev. X, vol. 10, p. 011022, 2020.