Thinking inside the blocks: Fighting faults with a logical instruction set

How large does a useful quantum computer need to be?

This is an important but surprisingly hard question to answer. When you’re building a useful, fault-tolerant machine you really need to know what kind of resources you should be targeting. The simple qubit and gate count metrics you can find in a textbook, though popular in the NISQ regime, are woefully inadequate for estimating fault-tolerant hardware requirements.

This disconnect exists because existing descriptions of algorithms very rarely dive into the vast machinery of fault tolerance. Every logical qubit or perfect gate assumed by standard algorithm descriptions hides the orchestration of many thousands of physical qubits. Estimating the actual resources needed for useful quantum algorithms means a case-by-case and laborious fleshing out of these fault tolerant implementation nuances, a process that can be needlessly diverse across different hardware platforms.

In fact, this task is currently so involved that embarrassingly few algorithms have been properly fault-tolerantly compiled. This is why we developed the logical blocks framework [1]. This framework provides a lego-like suite of building blocks for fault-tolerantly implementing an algorithm. Such a plug-and-play toolset removes one of the biggest hurdles to calculating resource requirements for useful applications. Moreover, it also lets us design much more resource-efficient logical gates.

Cracking the stack

Few quantum algorithms have been properly assessed for fault tolerance because of the very involved compilation process this entails. Compiling is a familiar idea from conventional computing, where a relatively short series of transformations converts human-written programs into machine-ready instructions (e.g. high-level language → assembly code → machine code). This structured set of transformations is known as a compilation stack.

Quantum computers also utilize a compilation stack, reducing (just about) human-readable circuit diagrams[1] into hardware-specific instructions. But whereas classical hardware is remarkably resilient to noise, a useful quantum computer’s compilation stack must also incorporate steps for implementing fault tolerance, as shown in Figure 1.

Compilation stack for a fault tolerant quantum computer

Figure 1: Example compilation stack for a fault-tolerant quantum computer, which implements fault tolerance mostly in the green region. Note that further hardware-specific compilation steps (not shown) sit to the right. Our logical blocks formalism greatly simplifies the “logical instruction set” step highlighted in yellow.

[1] There are many projects attempting to introduce more friendly quantum programming languages to act as a starting point for this compilation process, but it is still common to specify algorithms in more technical circuit diagrams.

Properly estimating the physical hardware resources required by a quantum algorithm means running it through these compilation steps. A large part of why this is such a time-consuming and bespoke process is because of the tricky fault-tolerant instruction compiler stage, which is where our logical blocks formalism lends a hand. 

This stage traditionally differs widely between quantum computing approaches and the ways they model computation. Circuit-based computations (popular with matter-based qubits) express logic in terms of time sequences of stabilizer measurements. Measurement-based approaches employ patterns of single-qubit measurements on special resource states. Fusion-based quantum computing (well suited to photonics) implements configurations of fusion measurements, while Floquet-type schemes use different schedules of measurement patterns.

This is a bewildering array of compilation routes to juggle. The logical blocks framework consolidates them by exploiting a common underlying structure, providing a platform-independent, ‘natural’ way of constructing fault-tolerant instructions.

By reducing these instructions to their most fundamental geometric structure, logical blocks defer hardware-specifics until the very end of the compilation stack. This vastly reduces the complexity of resource estimation and allows easier integration of estimates between different quantum computing approaches.

The logical blocks formalism is also flexible enough to work with different approaches to quantum error correction. To give a concrete example here we outline its use with one very popular error correction protocol – the surface code. We’ll sequentially follow how the three fault-tolerant instruction compiler steps from Figure 1 can be implemented with the surface code, allowing us to explicitly see how the logical blocks formalism contributes.

Fault-tolerant instruction compilation step 1: Logical qubit encoding

The first compilation step for implementing fault tolerance redundantly encodes error-protected logical information within many noisy physical qubits. In the surface code a single logical qubit is encoded in d² physical qubits, conceptually arranged into a square lattice, as shown in Figure 2.


 [1] introduces a more general object known as a logical block template, which describes logical instruction sets for general fault-tolerant channels and instruments, independently of specific topological code choices. The logical blocks we discuss here are how logical block templates appear for planar surface code patches using lattice surgery (described shortly).

 Whether these d² physical qubits are physically arranged in a lattice or not depends on hardware constraints.


 
Surface code patch arrangement diagram

Figure 2: Surface code patch arrangement of physical qubits to encode a single logical qubit. The d² physical qubits are shown as white circles (here d=5). Parity check measurements between groups of 4 physical qubits (2 at the patch edges) are shown by colored squares. Parity checks measuring Z operators are referred to as ‘primal’ and can be used to correct bit-flip errors. Those involving X operators are called ‘dual’ and can be used to correct phase-flips.

 

The first thing to note about the encoding shown in Figure 2 is how it allows us to detect errors. In traditional circuit-based approaches to fault tolerance, the colored squares in Figure 2 represent a series of multi-qubit parity check measurements each of which is performed on the four physical qubits at its corners (two for checks at the edges). By inspecting these measurement outcomes throughout the computation, a decoder algorithm running alongside the quantum computation can infer the location of errors. Note that the surface-code parameter d is actually the code distance, which determines how many errors can be reliably detected and corrected. Increasing d increases the number of errors we can deal with.
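To make these scalings concrete, here is a minimal sketch (our own illustration, not code from [1]) of a distance-d patch’s headline numbers, assuming the d²-qubit layout of Figure 2: d² data qubits monitored by d² − 1 parity checks, with up to ⌊(d − 1)/2⌋ errors guaranteed correctable.

```python
def surface_code_params(d: int) -> dict:
    """Headline parameters of a single distance-d surface code patch.

    Assumes the d^2-qubit layout of Figure 2: d^2 data qubits and
    d^2 - 1 parity checks, split between primal (Z) and dual (X) types.
    """
    assert d >= 3 and d % 2 == 1, "code distance is an odd integer >= 3"
    return {
        "data_qubits": d * d,
        "parity_checks": d * d - 1,
        # Number of errors guaranteed correctable by a distance-d code.
        "correctable_errors": (d - 1) // 2,
    }

print(surface_code_params(5))
# {'data_qubits': 25, 'parity_checks': 24, 'correctable_errors': 2}
```

Doubling d from 5 to 11 roughly quadruples the qubit count while more than doubling the number of correctable errors, which is the trade-off an informed choice of d must weigh.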

We represent a logical qubit in the patch of d² qubits from Figure 2 by making use of an important symmetry: switching all primal and dual squares shouldn’t change the structure of the surface code. A logical qubit is encoded in topological features which break this symmetry. Though this definition sounds exasperatingly abstract, the key point is that such topological features are in some sense ‘global’ properties of the d² qubits, and so errors on individual physical qubits are very unlikely to conspire to irrevocably disturb them. In that sense, information encoded within these features is resilient to common errors (those typically localized to small clusters of physical qubits). Equally importantly, it also remains undisturbed by the parity check measurements we use to locate and fix any errors. One set of topological defects commonly used to represent logical qubits is the corners of a surface code patch (where the boundary squares change from primal to dual – breaking the primal/dual symmetry), but other possibilities also exist.

Fault-tolerant instruction compilation step 2: Logical gate encoding

Knowing how to encode logical qubits as surface code patches is a good start, but on its own this just gives us a way to protect some initial logical state from errors – a kind of quantum memory. To do a computation we also need to know how to perform fault-tolerant logical operations on these surface code patches.

It turns out we can implement single-qubit gates fault tolerantly by simply performing some particular prescription of (potentially noisy) gates constrained to the d² physical qubits within a patch.

However, performing fault-tolerant multi-qubit operations (involving more than one surface code patch) is a little more involved, and can be achieved by a process called lattice surgery. Lattice surgery fault-tolerantly realizes gates like the CNOT, which act between different logical qubits, by performing joint operations between physical qubits along the boundaries of different surface code patches.

Performing lattice surgery means being able to interact the boundary qubits from different surface code patches – and so it’s crucial for us to decide how our logical qubit surface patches will be physically laid out in our computer. For some hardware it might be difficult to perform operations between the boundaries of patches if they’re physically far apart. This decision is made in the logical architecture compilation step (refer to Figure 1). A particularly simple logical architecture is what we refer to as the baseline architecture, shown in Figure 3. This uses a large ancilla register patch to mediate lattice surgery operations between the patch boundaries of an adjacent register of logical qubits.
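To get a feel for how the baseline architecture scales, here is a deliberately crude toy model (our own assumption for illustration, not a count from [1] or [2]): each logical qubit occupies one d × d surface code patch, and the ancilla register contributes a comparably sized tile per logical qubit, giving a data-qubit footprint of roughly 2·n·d².

```python
def baseline_data_qubit_footprint(n_logical: int, d: int) -> int:
    """Toy estimate of the data-qubit footprint of the baseline architecture.

    Assumptions (ours, for illustration only): each logical qubit is one
    d*d surface code patch, and the ancilla register contributes one
    comparably sized tile per logical qubit to mediate lattice surgery.
    Measurement qubits and routing overhead are ignored.
    """
    patch = d * d
    qubit_register = n_logical * patch    # the row of logical qubit patches
    ancilla_register = n_logical * patch  # the adjacent mediating ancilla patch
    return qubit_register + ancilla_register

print(baseline_data_qubit_footprint(100, 25))  # 100 logical qubits at d=25 → 125000
```

Even this rough model makes the headline point: hundreds of logical qubits at useful code distances already imply physical qubit counts in the hundreds of thousands.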

Baseline architecture for laying out surface code diagram

Fault-tolerant instruction compilation step 3: A logical instruction set

Having specified how to encode logical qubits (planar surface code patches) and the primitive operations for performing logic fault-tolerantly (most importantly, lattice surgery), we have all the ingredients necessary for performing fault-tolerant computations.

Though impractical at scale, this baseline architecture is suited to hardware with locality restrictions which might otherwise prevent interactions between surface code patches that are not immediately colocated (often the case for matter-based qubits). The capabilities of optical fiber allow photonic approaches to consider a more sophisticated, non-local, active volume architecture [2]. In this blog post we restrict ourselves to the baseline architecture purely for pedagogical simplicity.

But lattice surgery is a very granular level at which to have to think about a computation. Having to break an algorithm down to this level just to get hardware resource estimates is unrealistic. Not only is this time consuming, but it obscures insights into how logical information flows through a fault-tolerant algorithm, and how we might streamline it. Moreover, thinking of computations in the way shown in Figure 4 restricts us to a two-dimensional flatland of surface code patches. In reality, current logical architectures exist in a three-dimensional spacetime. This restriction limits our ability to understand and optimize how algorithms perform at the logical level.

The solution is a logical instruction set – a set of commonly used, core lattice surgery functionalities, pre-packaged for immediate and repeated usage.

Breaking out of flatland: Compiling with logical blocks

The logical blocks framework is such an instruction set. Each logical block is a specification of how to manipulate surface code patches in space and time (subject to any logical architecture constraints). The small set of possible logical blocks are composed to reproduce the information flow of any fault-tolerant computation.

We can visualize logical blocks by extending the picture of surface code patches from Figure 4 to include time as well as space. Figure 5 shows the qubit and ancilla registers from Figure 4 in an isometric perspective emphasizing that this is only a single ‘time slice’ of a more complete spacetime picture which would include further lattice surgery operations as time progresses.

Diagram of lattice surgery

[5] Primal and dual boundaries are two ways we can terminate a surface code patch. In terms of the surface code time-slices, they correspond to the two different patterns of two-body parity check measurements from Figure 2.

[6] Although we have considered the time dimension in Figure 6 to point from bottom to top, it is convenient to work with the abstraction that no dimension has been singled out as spatial or temporal. This is because some fault-tolerant hardware frameworks (such as Fusion-Based Quantum Computing) allow an interconversion between space and time resources. More efficient logical operations can be discovered by focusing solely on logical information flow, independent of architectural space-versus-time choices.

Lattice surgery operations diagram

It is in this three-dimensional spacetime picture that we define our logical blocks instruction set, by decomposing the three-dimensional structural motifs into the primitive elements, or ‘blocks’, shown in Figure 7.

Logical block decomposition diagram

Figure 7: Logical block decomposition of a lattice surgery spacetime diagram. Dashed lines demarcate where logical blocks will connect. Note that sizes of the blocks in the vertical direction are determined by the code distance used. Though they are often chosen to be the same, it may be advantageous for them to be different depending on the hardware and error model. Sizes shown in this figure are only illustrative, see [1] for more details.

Merely segmenting an existing lattice surgery spacetime diagram into logical blocks isn’t very useful on its own. In practice, we use the logical blocks formalism to circumvent lattice surgery considerations altogether: the logical blocks toolset is our starting point for piecing together the pipework-like flow of logical information required by an algorithm.

Figure 8 lists some more key logical blocks that can be used for building fault tolerant algorithms. Precisely which blocks are needed will depend on which particular fault-tolerant gates we are using to represent algorithms within our compilation stack. In [1] we also provide instructions on how to compile each of these logical blocks into physical instructions (which we refer to as a quantum instrument network), suitable for any hardware platform. Orchestrating the underlying physical operations of a quantum instrument network via logical blocks allows us to more readily ensure our computation is properly fault tolerant and identify optimization opportunities.

Figure 8: Example logical blocks that can be used to build an instruction set. Primal (dual) boundaries and membranes are shown in blue (red). Domain walls are shown in green. Input and output ports (corresponding to surface code patches going into or coming out of the logical operation) are shown hatched. We refer the reader to [1] for further explanation of each logical block and the role of domain walls.

Each logical block has its own volume – determined by its physical footprint and operation time. Thus we can adopt a simple, reductionist approach to estimating the resources needed by larger logical operations or whole algorithms: we simply enumerate the volumes of the logical blocks we need to obtain a physical space and time overhead for fault-tolerantly running an algorithm. This allows us to streamline a number of compilation considerations – for example, for the first time we can easily simulate how gates perform for different choices of the code distance d (see Figure 2), letting us make an informed choice for its value.
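The volume bookkeeping described above can be sketched as follows. Block names and per-block volumes here are illustrative placeholders of our own, not values from [1]; the only structural assumption is that each block spans O(d) in each spacetime direction, so volumes scale as multiples of a d³ unit cell.

```python
from collections import Counter

def estimate_volume(block_counts: Counter, block_volumes: dict, d: int) -> int:
    """Total spacetime volume, in physical qubit-rounds, of a computation
    built from logical blocks.

    Each block's volume is given in d^3 'unit cells', reflecting that a
    block spans O(d) in each of the three spacetime directions.
    """
    unit_cell = d ** 3
    cells = sum(block_volumes[name] * count for name, count in block_counts.items())
    return cells * unit_cell

# Hypothetical tally for a small computation (names and volumes illustrative).
blocks = Counter({"identity": 10, "lattice_surgery_merge": 4, "magic_state": 2})
volumes = {"identity": 1, "lattice_surgery_merge": 2, "magic_state": 4}

print(estimate_volume(blocks, volumes, d=5))  # (10*1 + 4*2 + 2*4) = 26 cells → 26*125 = 3250
```

Because d enters only through the unit-cell size, re-running the same tally at different code distances is exactly the kind of quick what-if comparison described above.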

Moreover, this simple representation of fault tolerant algorithms allows us to quickly develop intuitions for how information is flowing within an algorithm and which areas are computationally expensive. For example, our recent work on a modular decoder [3] was made possible by using the framework to explore how the important task of decoding can be segmented along the lines of logical blocks.

The logical blocks formalism is a Rosetta stone for understanding how quantum applications behave in practice, unifying resource estimation for a wide range of fault-tolerant hardware. This simplifies assessing and comparing the only benchmark that really matters – whether hardware will run useful applications. We’ve already made use of the framework to guide our early generation photonic machines and used its insights to significantly reduce hardware requirements – for example, by making more efficient logical gates. We’ve also developed an intriguing new architecture where logic is realized by teleporting twist defects (which are like the corners of the surface code) forward in time. But that’s the topic of another post…

Bibliography

[1] H. Bombin, C. Dawson, R. V. Mishmash, N. Nickerson, F. Pastawski, and S. Roberts, Logical Blocks for Fault-Tolerant Topological Quantum Computation, arXiv:2112.12160.

[2] D. Litinski and N. Nickerson, Active Volume: An Architecture for Efficient Fault-Tolerant Quantum Computers with Limited Non-Local Connections, arXiv:2211.15465.

[3] H. Bombín, C. Dawson, Y.-H. Liu, N. Nickerson, F. Pastawski, and S. Roberts, Modular Decoding: Parallelizable Real-Time Decoding for Quantum Computers, arXiv:2303.04846.
