We recently announced a new approach to vastly increasing the efficiency of running quantum algorithms. We call it the Active Volume Architecture. The key insight is that if you have access to certain hardware capabilities then you can obtain remarkable reductions in the running costs of commercially useful quantum algorithms (for example, reducing running costs by around 50x for factoring algorithms).
To explain how the Active Volume Architecture provides these improvements, we first need to be a bit more specific about which costs of a quantum computation we’re trying to reduce in the first place.
Quantum computing, but at what cost?
When it comes to commercially relevant algorithms (such as chemical simulations, or factoring) we normally divide our thinking about computational cost into space and time. In other words, we typically ask: how many qubits will we need to run the algorithm, and for how long will we need to perform operations on them?
But researchers in fault-tolerant quantum computing prefer working with a single aggregate metric, a computational spacetime volume – which quantifies both physical resources and runtime in one number. As the name suggests, this metric is found by multiplying the space (number of qubits) and time (i.e. number of computational cycles) involved in a computation. But when we're dealing with fault-tolerant quantum computations the metric becomes a little more nuanced. More specifically, we need to multiply the number of logical qubits by the number of logical cycles involved in running an algorithm if we want to find its fault-tolerant spacetime cost. Here a logical qubit refers to one free from the inherent errors plaguing quantum systems – one that is ‘ready to reliably perform logic’. Logical qubits obtain their hallowed purity by being distilled from many thousands of noisier physical qubits. And a logical cycle is the length of time it takes for our pristine logical qubits to undergo a robust fundamental logical operation.
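To spell that product out (with purely illustrative numbers, not an estimate for any real algorithm), here is a minimal sketch:

```python
# Purely illustrative numbers - not an estimate for any real algorithm.
def spacetime_volume(logical_qubits: int, logical_cycles: int) -> int:
    """Fault-tolerant spacetime volume = logical qubits x logical cycles."""
    return logical_qubits * logical_cycles

# e.g. 1,000 logical qubits kept busy for 10 million logical cycles:
print(spacetime_volume(1_000, 10_000_000))  # 10,000,000,000 logical qubit-cycles
```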
Thinking in terms of spacetime volume is more insightful than the reductive qubit and gate counts often bandied about when discussing the quantum computing power of smaller NISQ devices. This is because of the existence of clever tricks allowing us to trade off space against time in a linear fashion[1]. If a 1000-logical-qubit device can finish a specific quantum computation in 1 hour, then it is also possible to re-configure the same device to finish in just 30 minutes - albeit now requiring 2000 logical qubits to do so. Or if time is really at a premium, you could sacrifice 4000 logical qubits to get things done in just 15 minutes. What this means is that precisely how you apportion the cost of a computation between time and space is flexible – and not as important as the total overall spacetime cost[2].
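Written out as a toy calculation, the trade-off above looks like this (the 1000-logical-qubit, one-hour baseline is the example from the text; everything else is just arithmetic):

```python
# Re-proportioning a fixed spacetime volume between qubits and runtime,
# using the example numbers from the text.
baseline_qubits, baseline_minutes = 1_000, 60
volume = baseline_qubits * baseline_minutes  # the fixed spacetime budget

for speedup in (1, 2, 4):
    qubits = baseline_qubits * speedup   # pay with more logical qubits...
    minutes = volume / qubits            # ...to finish the job sooner
    print(f"{qubits:5d} logical qubits -> {minutes:.0f} minutes")
# 1000 logical qubits -> 60 minutes
# 2000 logical qubits -> 30 minutes
# 4000 logical qubits -> 15 minutes
```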
Compiling
Precisely what spacetime cost a quantum algorithm accrues depends heavily on how it is compiled. Compilation is the act of taking the abstract recipe described by a quantum algorithm and expressing it in the more restricted instruction set understood by a particular piece of quantum computing hardware.
Only once we see the algorithm expressed this way - in terms of actual hardware usage - can we begin to really assess the spacetime volume it needs to run. Clearly compilation, and thus spacetime costs, depend very much on the specific capabilities of the hardware architecture we’re compiling down to.
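As a cartoon of what this involves (the gate names and decompositions below are simplified stand-ins, not our actual compiler or instruction set), compilation can be pictured as rewriting an abstract circuit into a restricted native gate set and then reading off the spacetime cost of the result:

```python
# A cartoon of compilation: rewrite an abstract circuit into a restricted
# native gate set, then read off the spacetime cost of the result.
# The decompositions below are simplified stand-ins, not exact identities.
NATIVE_DECOMPOSITIONS = {
    "TOFFOLI": ["H", "CNOT", "T", "CNOT", "TDG", "CNOT", "T", "H"],  # placeholder
    "SWAP":    ["CNOT", "CNOT", "CNOT"],
    "CNOT":    ["CNOT"],
    "H":       ["H"],
}

def compile_circuit(abstract_circuit):
    """Expand each abstract gate into the native instruction set."""
    native = []
    for gate in abstract_circuit:
        native.extend(NATIVE_DECOMPOSITIONS[gate])
    return native

abstract = ["TOFFOLI", "SWAP", "H"]
native = compile_circuit(abstract)
logical_qubits = 3              # qubits the toy circuit acts on
logical_cycles = len(native)    # crudely: one native gate per logical cycle
print(f"{logical_cycles} native operations, "
      f"spacetime volume ~ {logical_qubits * logical_cycles} qubit-cycles")
```

The same abstract circuit compiled to a different instruction set, or to hardware with different connectivity, would come out with a different cycle count and hence a different volume.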
The cost of connectivity
One capability of a hardware architecture that has an especially large effect on compilation costs is the arrangement of its logical qubits: how easily different logical qubits can interact with each other.
A typically assumed architecture in the literature is one where logical qubits are arranged on a 2D square grid and are only allowed to undergo two-qubit operations with their nearest neighbors. Despite its obvious locality constraints, such an architecture can still be used to implement multi-qubit operations between non-nearest-neighbor qubits, but this involves the use of many "intermediary" qubits to establish connections. Compiling to this particular architecture incurs a heavy spacetime cost penalty to realize the longer-range connectivity needed by many useful algorithms.
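A very simplified model of that penalty (real routing on such architectures is more involved, so treat this as a sketch of the scaling only): connecting two distant logical qubits on the grid ties up a chain of intermediary qubits roughly as long as the Manhattan distance between them.

```python
# Simplified model of a long-range operation on a 2D nearest-neighbor grid:
# the two endpoints are joined by a chain of intermediary qubits whose length
# is roughly the Manhattan distance between them.
def intermediary_qubits(a, b):
    (ax, ay), (bx, by) = a, b
    return max(abs(ax - bx) + abs(ay - by) - 1, 0)

# Two logical qubits at opposite corners of a 32 x 32 logical-qubit grid:
print(intermediary_qubits((0, 0), (31, 31)))  # 61 intermediaries for one operation
```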
This kind of 2D-local architecture is commonly assumed in the literature because it matches the on-chip arrangement to which matter-based qubits (such as superconducting qubits) are confined. But this confinement comes with a large cost. It turns out that the intermediary qubits contribute a large portion of the spacetime volume costs of algorithms compiled to such architectures - even though these qubits do not directly contribute to the computation! In fact the irreducibly essential “active volume” of a computation is often only a small fraction of the total spacetime cost in 2D-local architectures.
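Continuing the toy routing sketch above (illustrative only, not a real resource estimate): for that single corner-to-corner operation, only the two endpoints perform logic, yet the routing chain accounts for almost all of the qubits involved.

```python
# Toy comparison of active vs. total cost for the corner-to-corner operation
# above (illustrative only - not a real resource estimate).
active_qubits = 2       # the two logical qubits actually performing logic
routing_qubits = 61     # the intermediaries from the sketch above
total_qubits = active_qubits + routing_qubits

print(f"active fraction ~ {active_qubits / total_qubits:.0%}")  # roughly 3%
```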
Breaking out of the plane
Eliminating the inactive volume in these commonly assumed architectures would reduce the spacetime volume costs of a wide range of quantum algorithms by orders of magnitude. Doing this means constructing architectures that break the assumption of 2D-locality. This might sound overly ambitious – after all, the 2D nature of commonly assumed architectures is based on seemingly reasonable assumptions about what’s possible given the geometry of a chip. But for hardware such as single-photon qubits, such assumptions are unnecessary.
A photonic fusion-based quantum computer consists of modules that are connected via optical fiber links. Each module can be thought of as a logical qubit and consists of multiple chip-sized objects, so these links are macroscopic, and the modules certainly don't need to be constrained to a 2D-grid-like network. We can increase the connectivity in such a network by using photonic switches that allow us to select the neighbors of each module, thereby rearranging the network on the fly and avoiding the use of intermediary qubits to establish long-range connections.
Nevertheless, connectivity in photonic architectures is not boundless, and we certainly would want to avoid the extreme scenario of needing all-to-all connectivity between modules because it implies the use of very large and complicated switches - degrading computational performance. One of the key breakthroughs in the active volume architecture we’ve developed is that it ensures that each module need only be connected to a number of ‘neighbors’ scaling just logarithmically with the total number of modules in the network. In a network of thousands of modules, this boils down to a few dozen neighbors per module – something commensurate with our developments in photonic switching technology. This is certainly more sophisticated than 4 neighbors per module as in a 2D grid, but entirely manageable for photonics, unlike the thousands of neighbors needed for naïve all-to-all connectivity.
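As a back-of-the-envelope check on the “few dozen neighbors” figure (the logarithmic scaling is the result described above; the constant factor in the sketch is an assumption chosen purely for illustration):

```python
import math

# Back-of-the-envelope neighbor counts per module. The constant factor c is
# an illustrative assumption; only the scaling behaviors matter here.
def neighbors_log(n_modules, c=4):
    return round(c * math.log2(n_modules))

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} modules: 2D grid = 4, "
          f"log-scaling ~ {neighbors_log(n)}, "
          f"all-to-all = {n - 1}")
# The 2D grid stays at 4, the log-scaling column stays at a few dozen,
# and all-to-all connectivity quickly becomes unmanageable.
```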
This addition of nonlocal connections (via optical fiber links) reduces the spacetime volume cost of quantum computations across the board. For example, we calculated that the volume cost of a 2048-bit factoring algorithm (the type of problem considered intractable for non-quantum computers) is reduced by a factor of 44 compared to a general-purpose architecture based on a 2D grid. It's worth highlighting that there is work in the literature [Gidney paper] on special-purpose 2D-grid architectures that manage to outperform general-purpose 2D-grid architectures (although by much less than the non-local architecture does). But those architectures are specific to a target application and not able to execute arbitrary quantum computations. Our Active Volume Architecture is general-purpose.
A factor of 50 is the difference between a computation that takes a year and a computation that takes a week. And using our results on interleaving – which leverage silicon photonic technology to easily trade off computational time and space resources – this reduction in volume is not limited purely to a speedup. It can also be used to reduce the size of the quantum computer needed to address a given application[3], increasing the scope for early-generation fault-tolerant photonic quantum computers.
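To make that flexibility concrete (the baseline numbers below are purely illustrative, not a resource estimate for any particular application): a 50x volume reduction can be spent entirely on runtime, entirely on machine size, or split between the two.

```python
# Spending a 50x spacetime-volume reduction on runtime or on machine size.
# The baseline below is purely illustrative.
baseline_days, baseline_qubits = 365, 10_000
reduction = 50

print(f"spent on runtime: {baseline_days / reduction:.0f} days "
      f"on {baseline_qubits} logical qubits")            # roughly a week
print(f"spent on size:    {baseline_days} days "
      f"on {baseline_qubits // reduction} logical qubits")
```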
_______________________
[1] These tricks are especially simple to perform when working with photonic qubits and are the topic of our interleaving paper.
[2] It’s worth noting of course that there are limitations to this tradeoff. The so-called ‘reaction limit’ means that we cannot throw resources at a computation to make it infinitely fast!
[3] It should be noted of course that practical limits on interleaving’s ability to trade space and time costs are set by the loss rate of long optical delay lines.