Quantum error mitigation (QEM) and quantum error correction (QEC) are two research areas that are often treated as distinct, and the problem of combining the two approaches in a non-trivial way has only recently started to be explored. In this paper, we explore a paradigm at the intersection of the two, based on the error mitigation technique of Zero-Noise Extrapolation (ZNE), which uses the distance of an error correcting code as a noise parameter. This is distinct from some alternative approaches, as QEC is here used as a subroutine inside the QEM framework, while other proposals use QEM as a subroutine inside QEC experiments. Intuitively, we exploit the fact that a reduction in the physical noise level is analogous to an increase in the code distance, as both result in a decrease in the logical error rate. As such, the extrapolation to zero noise in the case of ZNE becomes comparable to the extrapolation to infinite distance in the case of this method. We describe how to calculate expectation values from a fault-tolerant computation, and we gain some analytical intuition for our ansatz choice. We explore the performance of the considered method in reducing the errors in a range of expectation values for a realistic circuit-level noise model and realistic device imperfections on the rotated surface code, and we show in particular that the performance of the method holds even in the case of non-stabiliser input states.
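The extrapolation in code distance can be illustrated with a minimal numerical sketch. Assuming a toy exponential model $\langle O\rangle(d) = a + b\,r^d$ for the logical expectation value at distance $d$ (an illustrative ansatz, not necessarily the one used in the paper), three equally spaced distances suffice to solve for the infinite-distance value $a$ in closed form:

```python
import numpy as np

# Synthetic logical expectation values at code distances 3, 5, 7,
# generated from the toy model <O>(d) = a + b * r**d (values assumed).
a_true, b_true, r_true = 0.87, -0.5, 0.4
ds = np.array([3, 5, 7])
ys = a_true + b_true * r_true**ds

# Exact three-point fit of a + b*r^d for equally spaced distances:
# r^2 = (y7 - y5) / (y5 - y3), then solve for b and a.
r = np.sqrt((ys[2] - ys[1]) / (ys[1] - ys[0]))
b = (ys[1] - ys[0]) / (r**5 - r**3)
a = ys[0] - b * r**3          # extrapolated d -> infinity value

print(round(a, 6))            # recovers a_true = 0.87
```

In practice the data are noisy, so a least-squares fit over more distances would replace the exact three-point solve; the closed form above just makes the extrapolation transparent.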
Understanding how isolated quantum many-body systems thermalize remains a central question in modern physics. We study the onset of ergodicity in a two-dimensional disordered Heisenberg Floquet model using digital quantum simulation on IBM's Nighthawk superconducting processor, reaching system sizes of up to $10\times10$ qubits. We probe ergodicity across different length scales by coarse-graining the system into spatial patches of varying sizes and introducing a measure based on the collision entropy of each patch, enabling a detailed study of when ergodic behavior emerges across scales. The high sampling rate of superconducting quantum processing units, together with an optimal sample estimator, allows us to access patches of sizes up to $3\times3$. We observe that as the Heisenberg coupling $J$ increases, the noiseless system undergoes a smooth crossover from subergodic to ergodic behavior, with smaller patches approaching their random-matrix-theory values first, thereby revealing a hierarchy across scales. In the region of parameter space where classical tensor-network simulations are reliable (small patches or small values of $J$), we find excellent agreement with the error-mitigated quantum simulation. Beyond this regime, volume-law entanglement and growth in contraction complexity cause the cost of classical methods to rise sharply. Our results open new directions for the use of quantum computers in the study of quantum thermalization.
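The patch-based diagnostic can be sketched on synthetic data (not the paper's hardware samples): the collision (Rényi-2) entropy $H_2 = -\log_2 \sum_i p_i^2$ of a patch is estimated from bitstring samples via the standard unbiased pair-collision estimator of $\sum_i p_i^2$.

```python
import numpy as np
from collections import Counter

def collision_entropy(outcomes):
    """Collision (Renyi-2) entropy in bits, H2 = -log2(sum_i p_i^2),
    with sum_i p_i^2 estimated by the unbiased pair-collision statistic:
    E[ sum_i c_i (c_i - 1) ] = n (n - 1) sum_i p_i^2."""
    n = len(outcomes)
    counts = Counter(outcomes)
    p2_hat = sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))
    return -np.log2(p2_hat)

rng = np.random.default_rng(1)
# Synthetic data: uniformly random bitstrings on a 3x3 patch,
# so the true collision entropy is exactly 9 bits.
bits = rng.integers(0, 2, size=(100_000, 9))
outcomes = (bits @ (1 << np.arange(9))).tolist()
h2 = collision_entropy(outcomes)
print(h2)  # close to 9.0
```

For an ergodic patch the estimate approaches its random-matrix value, while subergodic patches retain a lower $H_2$; the pair-collision estimator is what makes $3\times3$ patches accessible at realistic shot counts.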
Developing quantum algorithms to simulate fluid dynamics has become an active area of research, as accelerating fluid simulations could have significant impact in both industry and fundamental science. While many approaches have been proposed for simulating fluid dynamics on quantum computers, it is largely unclear whether these algorithms will provide a speedup over existing classical approaches. In this paper we give evidence that quantum computers cannot significantly outperform classical simulations of fluid dynamics in general. We study two models of fluids: the Korteweg-de Vries (KdV) equation, which models shallow water waves, and the incompressible Euler equations, which model ideal, inviscid fluids. We show that any quantum algorithm simulating the KdV equation or the Euler equations for time $T$ requires $\Omega(T^2)$ and $e^{\Omega(T)}$ copies of the initial state in the worst case, respectively. These lower bounds hold for the task of preparing the final state, and similar bounds hold for history state preparation. We prove the lower bound for the KdV equation by investigating divergence of solitons. For the Euler equations, we show that instabilities enable fast state discrimination.
The Clifford Hierarchy has been a central topic in quantum computation due to its strong connections with fault-tolerant quantum computation, magic state distillation, and more. Nevertheless, only portions of the hierarchy are fully understood, such as diagonal gates and third-level gates. The diagonal part of the hierarchy can be climbed by taking square roots and adding controls. Similarly, square roots of Pauli gates (first level) are Clifford gates (i.e., they climb to the second level). Based on this theme, we study gates whose square roots climb to the next level. In particular, we fully characterize the Clifford gates whose square roots climb to the third level.
Are Gaussian measurements enough to distinguish between Gaussian states? Here, we tackle this question by focusing on the max-relative entropy as an operational distinguishability metric. Given two general multimode Gaussian states, we derive a condition, based on their covariance matrices, that completely determines whether or not there exists an optimal Gaussian measurement achieving the max-relative entropy. When the condition is satisfied, we find this optimal measurement explicitly. When the condition is not met, there is a strict gap between the distinguishability achievable by Gaussian measurements and the unconstrained max-relative entropy in which all measurements are allowed. We illustrate our results in the single-mode setting, and show examples of states for which this gap can be made arbitrarily large, revealing novel instances of Gaussian data hiding.
These notes are a short introduction to the mathematical theory of open quantum systems. They are meant to serve as an entry point into a broad research area which has applications across the quantum sciences dealing with systems subjected to external noise. The guiding idea is to let the key structures of the theory emerge from a concrete model. By working through the dissipative Jaynes-Cummings model the reader will discover explicitly how irreversible dynamics arises from a unitary system-reservoir evolution. The notions of the continuous-mode limit, correlation functions, and spectral density appear in a natural manner and lead to the evolution equation of the open system in the form of a master equation. This sets the stage for the more general analysis of completely positive, trace-preserving (CPTP) maps and the study of quantum dynamical semigroups. We motivate and prove the Kraus representation theorem, the dilation theorem and the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) theorem. Working through the exercises (for which full solutions are supplied) will reinforce the ideas introduced in the main text.
We present a general-purpose quantum error correction primitive based on state purification via the SWAP test, which we refer to as purification quantum error correction (PQEC). This method operates on $N$ noisy copies and requires only $O(M\log_2 N)$ data qubits to process the $M$-qubit inputs. In a similar way to standard QEC, the purification steps may be interleaved within a quantum algorithm to suppress the logical error rate. No postselection is performed and no knowledge of the state is required. We analyze its performance under a variety of error channels and find that PQEC is highly effective at boosting fidelity and reducing logical error rates, particularly for the depolarizing channel. Error thresholds for the local depolarizing channel are found to be $75\%$ for any register size. For local dephasing, the error threshold is reduced to $50\%$ but may be boosted using twirling.
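The SWAP-test primitive underlying such purification schemes admits a quick numerical check (an illustrative sketch, not the PQEC protocol itself): the probability of finding the test ancilla in $|0\rangle$ is $\tfrac12 + \tfrac12|\langle\psi|\phi\rangle|^2$, which can be verified against the explicit circuit branch amplitudes.

```python
import numpy as np

def rand_state(dim, rng):
    """Haar-ish random pure state as a normalized complex vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def swap_test_accept_prob(psi, phi):
    """Closed-form SWAP-test probability of measuring the ancilla in |0>."""
    return 0.5 + 0.5 * abs(np.vdot(psi, phi)) ** 2

def swap_test_circuit_prob(psi, phi):
    """Same probability from the circuit: after H, controlled-SWAP, H,
    the (unnormalized) ancilla-|0> branch is (|psi,phi> + |phi,psi>)/2."""
    branch0 = (np.kron(psi, phi) + np.kron(phi, psi)) / 2
    return float(np.vdot(branch0, branch0).real)

rng = np.random.default_rng(7)
psi, phi = rand_state(4, rng), rand_state(4, rng)
print(swap_test_accept_prob(psi, psi))              # identical copies: 1.0
print(abs(swap_test_accept_prob(psi, phi)
          - swap_test_circuit_prob(psi, phi)))      # formula matches circuit
```

Because the acceptance probability grows with the overlap, projecting onto the symmetric outcome biases the register toward agreement between copies, which is the intuition behind purification-based error suppression.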
This is a book about operational probabilistic theories. The standard approach in such theories is from a time forward perspective. In this book we mostly take a time symmetric perspective. This presents a branding problem. Is this a niche book merely about time symmetry? No. This is a comprehensive book about operational probabilistic theories, but mostly from a time symmetric perspective. In fact, this book consists of (1) a simple book about simple operations having simple causal structure (where all the inputs are before all the outputs), and (2) a complex book about complex operations that can have complicated causal structure (a complex operation is equipped with a causal diagram). For the simple case we are able to show that the time symmetric perspective is equivalent to the time forward perspective. In each book we set up (A) operational probabilistic theories (OPTs) in terms of operations, (B) Operational Quantum Theory (OQT) in terms of operator tensors which correspond to operations, and (C) the theory of Hilbert objects which can be doubled up to give operator tensors. Operations are required to be physical which guarantees that circuits built out of operations have probabilities between 0 and 1 and that certain causality conditions are met. We prove that when we wire together operations the resulting networks are also physical. We model Sorkin's impossible measurements with complex operations and show that physicality prevents anomalous signalling. We develop diagrammatic notation for Hilbert objects. This includes mirrors for doubling up and mirror theorems. We use this framework to prove time symmetric causal dilation theorems for various causal diagrams.
Certifying that quantum randomness generated by untrusted devices is unpredictable to an attacker (say, Eve) is crucial for device-independent security. Bipartite protocols where only one of the parties is trusted are termed one-sided device-independent (1SDI) or steering-based protocols, where the untrusted party (say, Alice) performs measurements on her part of a bipartite entangled state to steer the subsystem of the trusted party (say, Bob) into different ensembles (collectively, an assemblage) of quantum states. Recent work has shown that an assemblage has certified randomness if and only if it is realizable by a set of measurements that are star-incompatible, i.e., the measurement setting of interest for the guessing probability of Eve is incompatible with at least one of the remaining measurement settings of Alice. However, it remains conceivable that there exist star-incompatible measurements that cannot certify steering-based randomness, just like there exist incompatible measurements that cannot certify bipartite Bell nonlocality. Here we prove that any set of star-incompatible measurements can generate steering-based randomness, thereby establishing an equivalence between the two notions. We further introduce a weight-based measure of star-incompatibility and derive a lower bound on the star-incompatibility weight required to certify a given amount of randomness, capturing the qualitative and quantitative interplay between the quantum resources of star-incompatibility and steering-based randomness.
Simulating real-time dynamics under a Hamiltonian is a central goal of quantum information science. While numerous Hamiltonian-simulation quantum algorithms have been proposed, the effects of physical noise have rarely been incorporated into performance analysis, despite the non-negligible noise levels in quantum devices. In this work, we analyze noisy Hamiltonian simulation with quantum error mitigation for Trotterized and randomized LCU-based Hamiltonian simulation algorithms. We give an end-to-end comprehensive complexity analysis of error-mitigated Hamiltonian simulation algorithms using the mean-squared error. Because quantum error mitigation incurs a sampling cost that grows exponentially with the number of layers in a quantum algorithm, there is a trade-off between the bias in simulation accuracy and the sampling overhead. Optimizing this trade-off, we derive an analytic depth-selection rule and characterize the optimal end-to-end scaling as a function of target accuracy and noise parameters. We further quantify the noise-characterization cost required for error mitigation via gate set tomography and the recently proposed space-time noise inversion method, showing that the latter can significantly reduce the characterization overhead.
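The bias-variance trade-off behind depth selection can be illustrated with a deliberately simplified toy model (the constants and functional forms below are illustrative assumptions, not the paper's derived expressions): a first-order-Trotter-like bias $\sim cT^2/n$ for $n$ layers competing with a mitigation variance $\sim e^{2\gamma n}/\text{shots}$, minimized numerically over $n$.

```python
import numpy as np

# Toy stand-ins (illustrative, not the paper's constants):
# bias ~ c*T**2/n for an n-layer first-order Trotterization,
# error-mitigated-estimator variance ~ exp(2*gamma*n) / shots.
c, T, gamma, shots = 1.0, 2.0, 0.05, 10**6

n = np.arange(1, 400)
mse = (c * T**2 / n) ** 2 + np.exp(2 * gamma * n) / shots
n_opt = int(n[np.argmin(mse)])
print(n_opt, mse.min())
```

Too few layers and the Trotter bias dominates; too many and the exponential mitigation overhead blows up the variance, so the mean-squared error is minimized at an intermediate depth that depends on $T$, the noise rate, and the shot budget.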
Quantum generative modeling has emerged as a promising application of quantum computers, aiming to model complex probability distributions beyond the reach of classical methods. In practice, however, training such models often requires costly gradient estimation performed directly on the quantum hardware. Crucially, for certain structured quantum circuits, expectation values of local observables can be efficiently evaluated on a classical computer, enabling classical training without calls to the quantum hardware in the optimization loop. In these models, sampling from the resulting circuits can still be classically hard, so inference must be performed on a quantum device, yielding a potential computational advantage. In this work, we introduce a photonic quantum generative model built on parametrized Gaussian Boson Sampling circuits. The training is based on the efficient classical evaluation of expectation values enabled by the Gaussian structure of the state, allowing scalable optimization of the model parameters through the maximum mean discrepancy loss function. We demonstrate the effectiveness of the approach through numerical experiments on photonic systems with up to 805 modes and over a million trainable parameters, highlighting its scalability and suitability for near-term photonic quantum devices.
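The maximum mean discrepancy loss can be sketched classically; the snippet below is a generic Gaussian-kernel MMD estimator on synthetic Gaussian samples (not the photonic model itself), showing how the loss separates matching from non-matching distributions.

```python
import numpy as np

def mmd2(x, y, sigma=1.0):
    """Squared maximum mean discrepancy with a Gaussian kernel
    (biased V-statistic estimator over two sample sets)."""
    def k(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
# Same distribution: MMD^2 near zero; shifted distribution: clearly positive.
same = mmd2(rng.normal(0, 1, (500, 2)), rng.normal(0, 1, (500, 2)))
diff = mmd2(rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2)))
print(same, diff)
```

In the training loop described above, the model samples would come from expectation values that are classically computable thanks to the Gaussian structure, while the data samples come from the target distribution; minimizing MMD drives the two together.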
We present an efficient protocol leveraging classical computation to support Initial State Preparation for strongly correlated fermionic systems, a critical bottleneck for fault-tolerant quantum simulation. Focusing on nuclear shell model eigenstates, we first demonstrate that the Density Matrix Renormalization Group algorithm can efficiently approximate target states as Matrix Product States, capitalizing on the favourable entanglement structure of these fermionic systems. These high-fidelity approximations are then leveraged as a classical resource in a variational circuit optimization scheme to compile shallow quantum circuits. We establish concrete resource estimates by decomposing the resulting circuits into the industry-standard Clifford$+T$ gateset, exploring the benefits of specialized $U3$ synthesis techniques. For all nuclear systems tested, on up to 76 qubit Hamiltonians, we consistently find low $T$-count circuits preparing the nuclear eigenstates to high fidelity with $\sim 2\times 10^4$ total $T$ gates. This low number gives confidence these eigenstates can be prepared on early fault-tolerant quantum computers. Our work establishes a viable path toward practical ground state preparation for nuclear structure and other fermionic applications.
We construct unclonable encryption (UE) in the Haar random oracle model, where all parties have query access to $U,U^\dagger,U^*,U^T$ for a Haar random unitary $U$. Our scheme satisfies the standard notion of unclonable indistinguishability security, supports reuse of the secret key, and can encrypt arbitrary-length messages. That is, we give the first evidence that (reusable) UE, which requires computational assumptions, exists in "microcrypt", a world where one-way functions may not exist. As one of our central technical contributions, we build on the recently introduced path recording framework to prove a natural ``unitary reprogramming lemma'', which may be of independent interest.
We investigate the precision limits and optimal protocols for sensing single qubit signals in the presence of erasure noise. We study a hierarchy of precision limits achievable with metrological strategies of differing complexity, and identify the optimal protocol for each. The detectability of erasure noise is shown to lead to enhanced precision limits and simplified sensing protocols. For energy gap estimation, we demonstrate that a simple product-state continuous erasure detection strategy yields significant improvements, outperforming optimal entangled protocols even for large numbers of qubits. We show that for other single-qubit signals, quantum error correction provides a substantial advantage by correcting the dominant erasure processes, and can restore Heisenberg-limited precision in certain erasure configurations. As a byproduct of our analysis, we find erasure-conversion schemes for qubits subject to thermal noise that attain the corresponding ultimate precision limits.
Quantum channel discrimination is a fundamental task in quantum information processing. In the one-shot regime, discrimination between two candidate channels is characterized by the diamond norm. Beyond this basic setting, however, many scenarios in distributed quantum information processing remain unresolved, motivating notions of distinguishability that capture the power of the available resources. In this work, we formulate a theory of testers for bipartite channel discrimination, leading to the concept of the entanglement cost of bipartite channel discrimination: the minimum Schmidt rank $k$ of a shared maximally entangled state required for local protocols to achieve the globally optimal success probability. We introduce $k$-injectable testers as a tester-based description of entanglement-assisted local discrimination and, in particular, study the class of $k$-injectable positive-partial-transpose (PPT) testers, which constitutes a numerically tractable relaxation of the practically relevant class of LOCC testers. For every $k$, we derive a semidefinite program (SDP) for the optimal success probability, which in turn yields an efficiently computable one-shot PPT entanglement cost. To render these optimization problems numerically feasible, we prove a symmetry-reduction principle for covariant channel pairs, thereby reducing the effective dimension of the associated SDPs. Finally, by dualizing the SDP, we derive bounds on the composite channel-discrimination problem and illustrate our framework with proof-of-principle examples based on the depolarizing channel, the depolarized SWAP channel, and the Werner--Holevo channels.
We present an algebraic framework for approximate model reduction of Markovian open quantum dynamics that guarantees complete positivity and trace preservation by construction. First, we show that projecting a Lindblad generator on its center manifold -- the space spanned by eigenoperators with purely imaginary eigenvalue -- yields an asymptotically exact reduced quantum dynamical semigroup whose dynamics is unitary, with exponentially decaying transient error controlled by the generator's spectral gap. Second, for analytic perturbations of a Lindblad generator with a tractable center manifold, we propose a perturbative reduction that keeps the reduced space fixed at the unperturbed center manifold. The resulting generator is shown to remain a valid Lindbladian for arbitrary perturbation strengths, and explicit finite-time error bounds are provided that quantify leakage from the unperturbed center sector. We further clarify the connection to adiabatic elimination methods, both by showing how the algebraic reduction can be directly related to first-order adiabatic elimination and by providing sufficient conditions under which the latter method can be applied while preserving complete positivity. We showcase the usefulness of our techniques in dissipative many-body quantum systems exhibiting non-stationary long-time dynamics.
We introduce an algebraic structure for studying state-independent contextuality arguments, a key form of quantum non-classicality exemplified by the well-known Peres-Mermin magic square, and used as a source of quantum advantage. We introduce \emph{commutation groups} presented by generators and relations, and analyse them in terms of a string rewriting system. There is also a linear algebraic construction, a directed version of the Heisenberg group. We introduce \emph{contextual words} as a general form of contextuality witness. We characterise when contextual words can arise in commutation groups, and explicitly construct non-contextual value assignments in other cases. We give unitary representations of commutation groups as subgroups of generalized Pauli $n$-groups.
High-fidelity quantum operations are the cornerstone of fault-tolerant quantum computation. In open quantum systems, traditional optimal control only passively resists decoherence, leaving environment-induced uncertainty as a fundamental performance bottleneck. To overcome this, we propose a new optimal control framework with flag ancillas and the Flag-GRAPE algorithm, which can actively tailor the system's noise structure. By embedding post-selection directly into the objective function, Flag-GRAPE correlates decoherence errors with the ancilla's unexpected state. Subsequent measurement and post-selection effectively expel this uncertainty, circumventing the fidelity bounds of traditional control. Numerical simulations in a superconducting quantum circuit demonstrate a $51\%$ reduction in infidelity compared to traditional closed-system pulses and also show that such enhancement is robust across broad noise regimes. Furthermore, by actively converting unstructured decoherence into heralded erasure errors, Flag-GRAPE is inherently compatible with quantum error correction. We demonstrate this by initializing a logical cat-code state, showing that combining Flag-GRAPE with QEC yields immediate state-preparation enhancements. This new framework can reduce hardware overhead for fault-tolerant architectures and open up a practical path toward logical state preparation gains in near-term experiments.
This paper investigates quantum simulation algorithms for the Liouville equation in geometrical optics with partial transmission and reflection at sharp interfaces, based on the Schrödingerization method. By means of a warped phase transformation in one higher dimension, the Schrödingerization method converts linear partial differential equations into a system of Schrödinger-type equations with unitary evolution, thereby rendering them suitable for quantum simulation. In this work, the Schrödingerization method is combined with a Hamiltonian-preserving scheme that incorporates partial transmission and reflection into the numerical flux. A main difficulty is that the interface treatment in the classical scheme relies on threshold-dependent "if/else" procedures, making it highly nontrivial to reformulate the method in a matrix form suitable for quantum simulation. To overcome this difficulty, we encode the interface conditions into a partial transmission and reflection matrix prepared a priori, rather than during the time evolution. We present detailed constructions of the resulting quantum algorithms and show through complexity analysis that the proposed methods achieve polynomial quantum advantage in the precision parameter $\epsilon$ over their classical counterparts.
Efficient low-energy state preparation is a key objective in quantum computation and quantum simulation. Quantum imaginary-time evolution replaces real-time dynamics with imaginary-time dynamics, exponentially suppressing higher-energy eigenstates. We introduce deterministic unitary protocols that approximate imaginary-time evolution for ground-state preparation. The protocols require multiple copies of the system, real-time evolution under the system Hamiltonian, and controlled-SWAP operations (or more general SWAP-generated unitaries). We analyze two concrete circuit families: a tree architecture with provable polynomial-in-depth convergence but rapidly growing width, and a compact "hedge" architecture that achieves comparable accuracy with only polynomial width in a heuristic construction supported by numerics. We provide numerical evidence that mid-circuit post-selection can accelerate convergence with practical success probabilities. Separately, we demonstrate that circuit volume can be traded for the shot complexity of post-circuit observable estimation in the ground-state preparation setting. We outline concrete platform-specific implementation routes, where multi-copy registers and SWAP-mediated couplings are natural, thereby illustrating how these hybrid analog-digital circuits can complement existing state-preparation methods in the near term.
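The core mechanism, exponential suppression of excited states under imaginary-time evolution, can be checked in a few lines of linear algebra. The toy model below uses a random $8\times8$ Hermitian matrix standing in for the system Hamiltonian (an illustration of the principle, not of the multi-copy circuit protocols themselves):

```python
import numpy as np

rng = np.random.default_rng(0)
# Random 8x8 Hermitian matrix as a stand-in Hamiltonian
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
H = (A + A.conj().T) / 2
evals, evecs = np.linalg.eigh(H)          # eigenvalues in ascending order
ground = evecs[:, 0]

def imag_time(psi, tau):
    """Apply the normalized imaginary-time propagator e^{-tau H}."""
    out = evecs @ (np.exp(-tau * evals) * (evecs.conj().T @ psi))
    return out / np.linalg.norm(out)

psi0 = rng.normal(size=8) + 1j * rng.normal(size=8)
psi0 /= np.linalg.norm(psi0)
fids = [abs(np.vdot(ground, imag_time(psi0, t))) ** 2 for t in (0, 1, 5)]
print(fids)  # ground-state fidelity increases monotonically with tau
```

The rate of convergence is set by the spectral gap, which is why approximating $e^{-\tau H}$ with unitary multi-copy circuits, as in the protocols above, can prepare ground states efficiently when the gap is not too small.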
Randomized compiling (RC) is an established tool to tailor arbitrary quantum noise channels into Pauli errors. The effect of both spatial and temporal noise correlations in randomly compiled circuits, however, is not fully understood. Here, we show that for a broad class of correlated Gaussian noise, RC reduces both the strength and temporal range of correlations. For Clifford circuits, we derive a simple analytical expression for the circuit fidelity of randomly compiled circuits. Surprisingly, we show that this fidelity is always increased by the presence of correlations, suggesting that correlations are a resource in randomly compiled circuits. To leading order in system-bath coupling, we also show that RC suppresses the quantum component of bath correlations, implying that one can safely treat weak noise as being classical. Finally, through extensive numerical simulations, we show that our results remain valid for many relevant non-Clifford circuits. These results clarify how RC mitigates memory effects and enhances circuit robustness.
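The basic tailoring effect of randomized compiling can be sketched in the Pauli-transfer-matrix picture: twirling a single-qubit channel over the Pauli group keeps only the diagonal of its transfer matrix, i.e., it produces a Pauli channel. The snippet below is a generic illustration of this identity, not the paper's correlated-noise analysis.

```python
import numpy as np

# In the Pauli-transfer-matrix (PTM) picture, conjugation by I, X, Y, Z
# acts as a diagonal sign matrix on the (I, X, Y, Z) basis components.
SIGNS = np.array([
    [1,  1,  1,  1],   # I
    [1,  1, -1, -1],   # X
    [1, -1,  1, -1],   # Y
    [1, -1, -1,  1],   # Z
])

def pauli_twirl(ptm):
    """Average the channel over Pauli conjugations; the sign columns are
    mutually orthogonal, so only the diagonal of the PTM survives."""
    acc = np.zeros_like(ptm, dtype=float)
    for s in SIGNS:
        acc += np.diag(s) @ ptm @ np.diag(s)
    return acc / 4

rng = np.random.default_rng(0)
ptm = rng.normal(size=(4, 4))                 # generic toy transfer matrix
twirled = pauli_twirl(ptm)
print(np.allclose(twirled, np.diag(np.diag(ptm))))  # True
```

With temporally correlated noise the twirl at each cycle is drawn independently, which is what suppresses the off-diagonal (coherent) components cycle by cycle and, per the results above, also shortens the effective correlation range.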
The Kraus representation of quantum channels allows for a precise emulation of the complex dynamics that take place on quantum processors, whether for benchmarking algorithms, predicting the performance of error correction and mitigation, or in the myriad other uses of compiled digital sequences. Nonetheless, starting from first principles to obtain continuous quantum master equations involves various approximations such as weak coupling to the environment. Further, the Kraus operators for these equations cannot generally be obtained in closed form due to the complicated commutator structure of the problem. In our work, we bridge this gap by providing a general closed-form formulation for arbitrarily strong driving while remaining linear in the dissipator. The Kraus solution is expressed as a Riemann sum whose higher-order terms can converge quickly to high precision, which we demonstrate numerically. Such a formulation is highly relevant to quantum computing and gate-based models, where effective models are highly sought for large rotation gate angles, even under the influence of underlying non-trivial noise mechanisms.
The Bernstein-Vazirani (BV) algorithm is frequently taught as a canonical example of quantum parallelism, yet the standard interference-based explanation often obscures its underlying simplicity. We present a geometric reframing in which the Hadamard gate "wrapping" acts as a global basis rotation rather than a generator of computational complexity. This perspective reveals that the algorithm is effectively a classical linear computation over GF(2) performed in the conjugate Fourier basis, with the apparent parallelism arising from coordinate transformation. Building on Mermin's earlier pedagogical shortcut, which presented a 'classical' circuit equivalent but stopped short of explicitly labeling it as such, we elevate this to a formal geometric framework. In the extension, we distinguish between globally rotated circuits--which we reveal as classical linear computations--and topologically twisted circuits that generate quantum entanglement. We introduce a pedagogical taxonomy distinguishing (1) pure computational-basis circuits, (2) globally rotated circuits (exemplified by Bernstein-Vazirani), and (3) topologically twisted circuits involving non-aligned subsystem bases. This framework allows viewing the Gottesman-Knill theorem from a new angle and extends students' understanding of phase kickback and the 'Ricochet Property'. Furthermore, it provides a more intuitive starting point for explaining Bell-pair extensions through concrete circuit derivations and Qiskit simulations suitable for undergraduate quantum information courses. The outlook explores how this geometric view paves the way for understanding entanglement as topological twists.
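The claim that Bernstein-Vazirani is a GF(2)-linear computation conjugated by Hadamards is easy to verify with a statevector simulation (a plain-NumPy sketch, not the Qiskit material referenced in the abstract): $H^{\otimes n}$, the phase oracle $(-1)^{s\cdot x}$, then $H^{\otimes n}$ maps $|0\cdots0\rangle$ exactly to $|s\rangle$.

```python
import numpy as np

def bv_statevector(s):
    """Simulate Bernstein-Vazirani for hidden bitstring s: apply H on all
    qubits, the phase oracle (-1)^(s.x), then H on all qubits again."""
    n, N = len(s), 2 ** len(s)
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    Hn = H1
    for _ in range(n - 1):
        Hn = np.kron(Hn, H1)
    state = Hn @ np.eye(N)[0]                 # uniform superposition
    s_int = int(s, 2)
    # Phase oracle: (-1) to the parity of the bitwise AND of x and s
    phases = np.array([(-1) ** bin(x & s_int).count("1") for x in range(N)])
    return Hn @ (phases * state)              # equals |s> exactly

state = bv_statevector("1011")
print(format(int(np.argmax(np.abs(state))), "04b"))  # prints "1011"
```

All the "work" happens in the diagonal phase oracle, which is classical in the Fourier basis; the two Hadamard layers are just the global basis rotation that the geometric reframing above makes explicit.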
Nicolas Gosling, Denis Bénâtre, Nicolas Zapata, Paul Kugler, Mitchell Field, Sumeru Hazra, Simon Günzler, Thomas Reisinger, Martin Spiecker, Mathieu Féchant, Ioan M. Pop

Achieving fault tolerance with superconducting quantum processors requires qubits to operate within the regime of threshold theorems based on the Born-Markov approximation. This approximation, which models dissipation as constant energy decay into a memoryless environment, breaks down when qubits couple to long-lived two-level systems (TLSs) that become polarized during operation and retain memory of past qubit states. Here, we show that non-Poissonian quantum jump traces carry the information required to distinguish long-lived TLSs from the standard Markovian bath. By fitting the Solomon equations to measured quantum-jump dynamics arising naturally from thermal fluctuations, we can disentangle the coupling of the qubit to the two environments. Sweeping the qubit frequency reveals distinct peaks, each associated with a TLS that outlives the qubit, providing a handle to understand their microscopic origin.
We explore the sense in which the existing constructions for higher-order maps on quantum theory based on causality constraints and compositionality constraints respectively, coincide. More precisely, we construct a functor F : Caus(C) -> StProf(C1) from higher-order causal categories to the category of strong profunctors over first-order causal processes that is lax-lax duoidal, full, faithful, and strongly closed whenever C is additive. When C = CP this embedding is furthermore strong on the sequencer for duoidal categories, expressing the possibility to interpret one-way signalling (but not general non-signalling) constraints in terms of the coend calculus for profunctors. We conclude that insofar as compositional constraints can be used to express causality constraints, the profunctorial approach generalises higher-order quantum theory to a construction over general symmetric monoidal categories.
Fast and high-fidelity qubit measurement plays a key role in quantum error correction. In superconducting qubits, measurement is typically performed using a resonant microwave drive on a readout resonator dispersively coupled to the qubit. Shorter measurement times require larger numbers of photons populating the readout resonator, which ultimately leads to undesired measurement-induced state transitions (MIST) of the qubit. MIST can be particularly problematic because these transitions often leave the qubit in a high energy state, and the MIST locations in readout parameter space drift as a function of qubit offset charge. In transmon qubits, these drifts have been avoided using very large qubit-resonator detunings or dedicated offset charge biases. In this work, we take an alternative approach and add an inductive shunt to the transmon to eliminate the offset charge dependence and stabilize the MIST. We experimentally characterize MIST in several different inductively-shunted transmons, in agreement with quantum and semiclassical models for MIST. These results extend to other inductively-shunted qubits.
In quantum mechanics, not everything that can be observed can be observed simultaneously. Observational data exhibits \emph{contextuality} -- a generalisation of nonlocality -- if the result of an observation is necessarily dependent on which combination of observables was measured. This article gives a mathematical introduction to contextuality, emphasising its nature as a general feature of probability theory and logic, rather than of any particular quantum theory.
Superconducting quantum circuits are promising platforms for scalable quantum computing, where qubit coherence is critically determined by microscopic defects in the oxide tunneling barrier of Josephson junctions. Amorphous Al$_2$O$_3$ is widely used as a barrier material, but under irradiation, oxygen vacancy (V$_O$) defects are readily generated, introducing noise sources that accelerate qubit decoherence. We systematically investigate the structural characteristics and electronic impact of V$_O$ defects in amorphous Al$_2$O$_3$ using first-principles calculations and \textit{ab initio} molecular dynamics. Our results show that both the coordination environment and concentration of V$_O$s strongly influence electrical conductivity. In particular, two- and three-coordinated V$_O$s, unique to the amorphous structure, enhance conductivity more than conventional four-coordinated vacancies. Increasing V$_O$ concentration amplifies conductivity fluctuations, which we link to critical current noise in Josephson junctions. Using a noise model, we estimate that higher V$_O$ densities lead to shorter qubit coherence times. These findings provide insights for radiation-hard design of superconducting quantum devices.
ArXiv:2508.17898 proposed the booklet wormhole as the holographic dual of the GHZ state. This paper extends the investigation into this geometry, particularly focusing on the junction conditions for matter fields. We show that the symmetry of the GHZ state requires the bulk to admit special Killing vector fields that standard manifolds cannot realize. Moreover, these bulk symmetries require unprecedented quantum non-local junction conditions at the multi-way interface: Observers entering from different horizons will perceive different states inside the wormhole, where the junction conditions appear as constraints on the observables of different sets of observers. We finally discuss how to render booklet wormholes traversable via boundary deformations. A localized wave packet injected from one page generally evolves into a non-local mixed state on each remaining page, with the information encoded in the entanglement between different pages.
Dielectric loss at the interfaces of superconducting films has long been recognized as limiting the performance of state-of-the-art superconducting circuits. Notably, the presence of a native oxide layer on the film is hypothesized to contribute to dielectric loss at the metal-air interface. Here, we explore rhenium as a candidate for the film, motivated by its remarkable property to suppress native oxide formation. We demonstrate rhenium on sapphire as a promising material platform for superconducting circuits through the realization of transmons with mean relaxation times $T_1$ up to 407 microseconds at 5 GHz. Our transmons are supplemented with a loss characterization study, in which we separate the dominant loss mechanisms and construct a loss budget that agrees with our $T_1$ measurements. Further characterization may establish rhenium as a leading candidate for maximizing coherence times.
We develop a microscopic theory of thermalisation for a thermometer coupled to a many-body bath beyond standard Markovian and Fermi-golden-rule assumptions. By modeling interaction matrix elements in the non-interacting basis as independent random variables, we derive a diffusion-propagator expression for the reduced dynamics and show that relaxation is controlled by the distribution of interaction-induced level broadenings. The theory predicts a thermalisation timescale set by the inverse typical broadening and yields a non-Markovian generalization of global balance. Exact-diagonalization tests for heavy-tailed Lévy couplings, an all-to-all transverse-field Ising model, and the one-dimensional Imbrie model show good agreement with these predictions.
Loop corrections to primordial correlation functions are unavoidable due to the non-linear nature of gravity. Previous works have established a robust framework for computing the renormalised one-loop power spectra of scalar and tensor modes, but primarily in (near) de Sitter backgrounds. In this work, we develop a consistent renormalisation procedure applicable to inflationary backgrounds that strongly break de Sitter symmetries and generate scale-dependent features in the primordial spectra. Our analysis is performed within the Effective Field Theory (EFT) of inflationary fluctuations, allowing for arbitrary time dependence of the Wilson coefficients. We show that both ultraviolet divergences and tadpoles of the theory, despite their strong time and scale dependence, can be cancelled by a finite set of local counter-terms compatible with the EFT symmetries. Importantly, this result only relies on the existence of an initial phase of adiabatic evolution continuously related to the Bunch-Davies vacuum and holds independently of the precise time dependence of the background and of the free-field mode functions. We then study two concrete realisations, corresponding to resonant and sharp features. In both cases, all calculations are carried out exactly in the limit of small feature amplitude. We analyse perturbativity and provide the first explicit demonstration that the renormalised one-loop power spectrum generated by a localised feature along the inflationary trajectory vanishes both at large and small scales. Our scale-dependent renormalisation framework implies that models of primordial features used to fit CMB residuals are consistent with perturbativity bounds, and opens the door to systematic studies of loop corrections in more complicated scenarios relevant for scalar-induced gravitational waves and primordial black holes.
Living systems are open nonequilibrium systems that continuously exchange energy, matter, and information with their environments, leading to stochastic dynamics with memory and active fluctuations. In this study, we develop a non-Markovian theoretical framework for the entropy dynamics of living systems based on the Keldysh functional formalism and stochastic thermodynamics. The approach naturally incorporates colored environmental noise, memory-dependent dissipation, and many-body interactions, yielding generalized Langevin dynamics and non-Markovian master equations. Within this framework we derive an exact frequency-domain expression for the entropy production rate and show that violations of the fluctuation-dissipation relation provide a direct thermodynamic signature of active biological fluctuations. We further demonstrate that environmental memory enhances low-frequency fluctuations and entropy production, leading to critical slowing down near dynamical instability. These results provide a microscopic physical foundation for the entropy "bathtub" picture of living systems and connect entropy evolution with development, aging, and death in nonequilibrium dynamics.
Translating complex reinforcement learning (RL) environments into high-performance implementations has traditionally required months of specialized engineering. We present a reusable recipe - a generic prompt template, hierarchical verification, and iterative agent-assisted repair - that produces semantically equivalent high-performance environments for <$10 in compute cost. We demonstrate three distinct workflows across five environments. Direct translation (no prior performance implementation exists): EmuRust (1.5x PPO speedup via Rust parallelism for a Game Boy emulator) and PokeJAX, the first GPU-parallel Pokemon battle simulator (500M SPS random action, 15.2M SPS PPO; 22,320x over the TypeScript reference). Translation verified against existing performance implementations: throughput parity with MJX (1.04x) and 5x over Brax at matched GPU batch sizes (HalfCheetah JAX); 42x PPO (Puffer Pong). New environment creation: TCGJax, the first deployable JAX Pokemon TCG engine (717K SPS random action, 153K SPS PPO; 6.6x over the Python reference), synthesized from a web-extracted specification. At 200M parameters, the environment overhead drops below 4% of training time. Hierarchical verification (property, interaction, and rollout tests) confirms semantic equivalence for all five environments; cross-backend policy transfer confirms zero sim-to-sim gap for all five environments. TCGJax, synthesized from a private reference absent from public repositories, serves as a contamination control for agent pretraining data concerns. The paper contains sufficient detail - including representative prompts, verification methodology, and complete results - that a coding agent could reproduce the translations directly from the manuscript.
The `15-minute city' has emerged as a central paradigm in urban planning, promoting universal access to work and essential services within short travel times. Its feasibility, particularly for commuting to work, has however rarely been examined quantitatively. Here, we show that proximity to employment is fundamentally constrained by the internal structure of urban economies. Combining urban geometry with empirically observed firm-size distributions, we derive a lower bound on commuting times that holds independently of planning choices or transport technologies. This bound reveals a sharp transition: when employment is sufficiently concentrated, no spatial rearrangement of workplaces can ensure uniformly short commutes, even under optimal placement. Applied to Paris and its near suburbs, we find that achieving universal 15-minute commutes would require substantial economic restructuring or differentiated mobility strategies. The relevant question is therefore not whether an $x$-minute city is achievable, but what the minimal feasible $x$ is given a city's economic structure and spatial scale.
cs.DS arXiv:2603.12052v1 (Mar 13 2026)
The classic pivot-based clustering algorithm of Ailon, Charikar and Chawla [JACM'08] is a factor-3 approximation, but all concrete examples showing that it is no better than 3 rely on some very good clusters, e.g., a complete graph minus a matching. By removing all good clusters before each pivot step, we show that the approximation ratio improves to $2.9991$. In addition, we evaluate the proposed algorithm on synthetic datasets, where it performs remarkably well and improves over both the algorithm for locating good clusters and the classic pivot algorithm.
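For context, the classic pivot algorithm that this abstract improves upon is short enough to sketch. Below is a minimal Python illustration on a toy correlation-clustering instance; the function name `cc_pivot` and the example graph are illustrative, not from the paper, and the paper's refinement (stripping out good clusters before each pivot step) is not implemented here.

```python
import random

def cc_pivot(nodes, positive, rng=random.Random(0)):
    """Classic pivot algorithm for correlation clustering
    (Ailon, Charikar, Chawla [JACM'08]): repeatedly pick a
    random pivot and cluster it together with every remaining
    node joined to it by a '+' edge."""
    remaining = list(nodes)
    clusters = []
    while remaining:
        pivot = rng.choice(remaining)
        cluster = [pivot] + [v for v in remaining
                             if v != pivot and (pivot, v) in positive]
        clusters.append(cluster)
        remaining = [v for v in remaining if v not in cluster]
    return clusters

# Toy instance: '+' edges form two cliques {0,1,2} and {3,4};
# here every pivot choice recovers the two cliques exactly.
plus = {(u, v) for u in (0, 1, 2) for v in (0, 1, 2) if u != v}
plus |= {(u, v) for u in (3, 4) for v in (3, 4) if u != v}
print(cc_pivot(range(5), plus))
```

The factor-3 lower-bound examples mentioned in the abstract are instances where, unlike this toy case, an unlucky pivot choice splits a near-perfect cluster.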
Submodular maximization constitutes a prominent research topic in combinatorial optimization and theoretical computer science, with extensive applications across diverse domains. While substantial advancements have been achieved in approximation algorithms for submodular maximization, the majority of algorithms yielding high approximation guarantees are randomized. In this work, we investigate deterministic approximation algorithms for maximizing non-monotone submodular functions subject to matroid and knapsack constraints. For the two distinct constraint settings, we propose novel deterministic algorithms grounded in an extended multilinear extension framework. Under matroid constraints, our algorithm achieves an approximation ratio of $(0.385 - \epsilon)$, whereas for knapsack constraints, the proposed algorithm attains an approximation ratio of $(0.367 -\epsilon)$. Both algorithms run in $\mathrm{poly}(n)$ query complexity, where $n$ is the size of the ground set, and improve upon the state-of-the-art deterministic approximation ratios of $(0.367 - \epsilon)$ for matroid constraints and $0.25$ for knapsack constraints.
Quantum teleportation uses a shared entangled resource, local operations, and a digitally error-corrected classical channel to transfer quantum states between distant parties. We introduce a hybrid teleportation-direct transmission protocol for state transfer that still exploits entanglement, but replaces classical communication and digital error correction with an analog feedforward through a noisy quantum channel. We show that quantum teleportation outperforms this protocol if the communication channel reduces the entanglement of all bipartite states having the same amount of entanglement as the resource; otherwise, the hybrid protocol is optimal. We apply our result to the state transfer of a uniformly distributed coherent-states codebook, highlighting experimentally relevant scenarios where our protocol is most effective. Our findings are directly relevant to both optical and superconducting microwave channels, where analog feedforward techniques have been recently implemented.
The computational universality with an elementary gate set $\{H,CCZ\}$ can be transformed to the strict universality by using a maximally imaginary state $|+i\rangle$ and some non-imaginary ancillary qubits. From the viewpoint of operational resource theory, it would be intriguing to elucidate a resource for the universality transformation. In this paper, we explore a necessary and sufficient condition for resource states to realize the universality transformation under free real operations. We show that $|+i\rangle$ is a unique resource state up to the free operations. Moreover, we obtain a stronger conclusion. If a given resource state cannot be used for the universality transformation, then realizable quantum gates are restricted to real orthogonal matrices. Therefore, we conclude that $|+i\rangle$ is unique (up to the free operations) not only as a state whose resource measure of imaginarity is maximal, but also as a state which empowers real operations with the ability to apply at least one non-real quantum gate (regardless of the magnitudes of its imaginary parts).
We show how to systematically construct weak integrability breaking perturbations (WIBs) for classical integrable models on the lattice. These perturbations, which allow quasi-conserved quantities, have mostly been explored in quantum systems, where they are expected to delay the onset of thermalization and diffusive transport to timescales far exceeding those predicted by Fermi's golden rule. However, accessing such long-time dynamics in quantum models is computationally challenging. Classical integrable lattice models offer a complementary setting for probing transport and long-time dynamics under WIBs. In this work, we specialize our general framework to construct several families of WIBs for the Ishimori model, the Toda chain, and the Harmonic Oscillator Chain (HOC). Such constructions can help quantify how WIBs contribute to anomalous transport and serve as a benchmark for thermalization studies in perturbed integrable models. An important example is the Fermi-Pasta-Ulam-Tsingou (FPUT) model: Starting from the HOC, we show that the cubic nonlinearity (the alpha-FPUT interaction) is a genuine WIB perturbation. Using the integrals of motion (IoMs) of the Toda lattice, we explicitly construct corrections to the entire hierarchy of the HOC IoMs, thereby obtaining an infinite tower of quasi-conserved quantities for the alpha-FPUT chain. We further identify the corresponding adiabatic gauge potential (AGP) as a nontrivial trilocal generator in real space, and show that, more generally, any cubic, translationally invariant, momentum-conserving perturbation of the HOC admits such a generator and is therefore a WIB. Together with our transport and AGP-variance studies, our results provide a unified classical framework for weak integrability breaking and for diagnosing anomalous thermalization and transport in nearly integrable Hamiltonian lattice systems.
Silu Zhao, Li Li, Weiping Yuan, Xinhui Ruan, Jinzhe Wang, Bingjie Chen, Yunhao Shi, Guihan Liang, Shi Xiao, Jiacheng Song, Jinming Guo, Xiaohui Song, Kai Xu, Heng Fan, Zhongcheng Xiang, Dongning Zheng
We demonstrate high-fidelity single-qubit gates on a C-shunt flux qubit that simultaneously combines a large anharmonicity ($\mathcal{A}/2\pi=848~\mathrm{MHz}$) with long relaxation time ($T_1 = 23~\mu\text{s}$). The large anharmonicity significantly suppresses leakage to higher energy levels, enabling fast and precise microwave control. Using DRAG pulses and randomized benchmarking, the qubit achieves gate fidelities exceeding 99.9\%, highlighting the capability of C-shunt flux qubits for robust and high-performance quantum operations. These results establish them as a promising platform for scalable quantum information processing.
Policy gradient algorithms have driven many recent advancements in language model reasoning. An appealing property is their ability to learn from exploration on their own trajectories, a process crucial for fostering diverse and creative solutions. As we show in this paper, many policy gradient algorithms naturally reduce the entropy -- and thus the diversity of explored trajectories -- as part of training, yielding a policy increasingly limited in its ability to explore. In this paper, we argue that entropy should be actively monitored and controlled throughout training. We formally analyze the contributions of leading policy gradient objectives on entropy dynamics, identify empirical factors (such as numerical precision) that significantly impact entropy behavior, and propose explicit mechanisms for entropy control. These include REPO, a family of algorithms that modify the advantage function to regulate entropy, and ADAPO, an adaptive asymmetric clipping approach. Models trained with our entropy-preserving methods maintain diversity throughout training, yielding final policies that are more performant and retain their trainability for sequential learning in new environments.
Fracton phases are new types of phases of matter characterized by subsystem global symmetry, which is a generalized global symmetry whose symmetry operator is partially topological. Their continuum low-energy effective descriptions admit two different formulations: an exotic quantum field theory (QFT) using exotic tensor gauge fields, and a foliated QFT constructed from a foliation structure and foliated gauge fields. For certain fracton QFTs, these two descriptions are equivalent, which is called the foliated-exotic duality. In this dissertation, we extend the foliated-exotic duality by combining it with the anomaly inflow mechanism for 't Hooft anomalies of subsystem symmetries. This dissertation has two main results. First, we discuss the exotic and foliated $BF$ theories in 2+1 dimensions, which exhibit the mixed 't Hooft anomaly of $\mathbb{Z}_N \times \mathbb{Z}_N$ subsystem symmetry. This anomaly is captured by a subsystem symmetry-protected topological (SSPT) phase for $\mathbb{Z}_N \times \mathbb{Z}_N$ subsystem symmetry in one dimension higher. By extending the foliated-exotic duality in the fractonic $BF$ theory to the SSPT phase, we establish the field correspondences in the SSPT phase and construct the foliated description of the SSPT phase. Second, we discuss the exotic $\phi$-theory in 2+1 dimensions -- a fractonic gapless scalar field theory, which has the 't Hooft anomaly of $U(1) \times U(1)$ subsystem symmetry. The anomaly is captured by an SSPT phase for $U(1) \times U(1)$ subsystem symmetry in 3+1 dimensions via the anomaly inflow mechanism. Extending the foliated-exotic duality to the $\phi$-theory, we establish field correspondences in the $\phi$-theory and construct the foliated $\phi$-theory that is equivalent to the exotic $\phi$-theory. This provides the first example of the foliated-exotic duality in gapless theories.
Linear quantum amplifiers are indispensable tools for quantum technologies, yet their performance is fundamentally limited by quantum noise, precluding any signal-to-noise ratio (SNR) enhancement unless supplemented by post-selection or non-classical resources. To surpass this limitation, we propose a nonlinear quantum amplification strategy that exploits the interplay between a gain-stabilized bright eigenmode of a coupled two-mode bosonic system and Kerr nonlinearity. We demonstrate that this interplay enables the signal gain to surpass the noise gain in a selected quadrature, leading to a net increase in the SNR beyond the quantum limits of conventional linear amplifiers. Our work thus establishes a novel nonlinear amplification paradigm capable of enhancing the SNR, with promising applications across quantum information processing, quantum communications, and quantum metrology.
We establish a framework for realizing back-action-evading (BAE) measurements and quantum non-demolition (QND) variables in linear quantum systems. The key condition, a purely imaginary Hamiltonian with a real or imaginary coupling operator, enables BAE measurements of conjugate observables. Symmetric coupling further yields QND variables. For non-compliant systems, coherent feedback can engineer BAE measurements. Crucially, the QND interaction condition simultaneously ensures BAE measurements and promotes the coupling operator to a QND observable. This work provides a unified structural theory for enhancing precision in quantum metrology and sensing.
Electron-phonon (e-ph) interactions play a crucial role in determining many properties of materials. In this context, the Su-Schrieffer-Heeger (SSH) model, where atomic motion modulates the electronic hopping, has gained significant attention due to its potential for strong electron pairing in relation to high-$T_c$ superconductivity. Previous studies of the SSH models have addressed many aspects of this problem, but have focused heavily on either dilute or half-filled models with dispersionless (Einstein) phonons. Here, we study the effects of dispersive optical phonons on the lightly doped one-dimensional optical Hubbard-SSH model using the density matrix renormalization group. We observe a significant enhancement in singlet binding driven by phonon dispersion; however, by calculating various correlation functions, we find that the enhanced binding does not translate to increased superconducting correlations but rather robust bond correlations in the studied parameter regime. Nevertheless, the significant impact of phonon dispersion on these correlations highlights the need to go beyond the Einstein phonon limit while modeling realistic quantum materials.
The transformer has revolutionized modern AI across language, vision, and beyond. It consists of $L$ layers, each running $H$ attention heads in parallel and feeding the combined output to the subsequent layer. In attention, the input consists of $N$ tokens, each a vector of dimension $m$. The attention mechanism involves multiplying three $N \times m$ matrices, applying softmax to an intermediate product. Several recent works have advanced our understanding of the complexity of attention. Known algorithms for transformers compute each attention head independently. This raises a fundamental question that has recurred throughout TCS under the guise of ``direct sum'' problems: can multiple instances of the same problem be solved more efficiently than solving each instance separately? Many answers to this question, both positive and negative, have arisen in fields spanning communication complexity and algorithm design. Thus, we ask whether transformers can be computed more efficiently than $LH$ independent evaluations of attention. In this paper, we resolve this question in the negative, and give the first non-trivial computational lower bounds for multi-head multi-layer transformers. In the small embedding regime ($m = N^{o(1)}$), computing $LH$ attention heads separately takes $LHN^{2 + o(1)}$ time. We establish that this is essentially optimal under SETH. In the large embedding regime ($m = N$), one can compute $LH$ attention heads separately using $LHN^{\omega + o(1)}$ arithmetic operations (plus exponents), where $\omega$ is the matrix multiplication exponent. We establish that this is optimal, by showing that $LHN^{\omega - o(1)}$ arithmetic operations are necessary when $\omega > 2$. Our lower bound in the large embedding regime relies on a novel application of the Baur-Strassen theorem, a powerful algorithmic tool underpinning the famous backpropagation algorithm.
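The object whose complexity this abstract analyzes — $LH$ independent evaluations of attention on $N \times m$ matrices — can be made concrete with a short numerical sketch. The NumPy code below is illustrative only: the head-combining step is simplified to an average (real transformers use learned projections), and all names and dimensions are made up for the example.

```python
import numpy as np

def attention(Q, K, V):
    """One attention head: softmax(Q K^T / sqrt(m)) V,
    where Q, K, V are each N x m matrices."""
    m = Q.shape[1]
    scores = Q @ K.T / np.sqrt(m)                # N x N intermediate product
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V                           # N x m output

# L layers, each running H heads in parallel: the LH separate
# attention evaluations whose joint cost the paper lower-bounds.
rng = np.random.default_rng(0)
N, m, L, H = 8, 4, 2, 3
X = rng.standard_normal((N, m))
for _ in range(L):
    heads = [attention(X, X, X) for _ in range(H)]
    X = np.mean(heads, axis=0)  # simplified stand-in for head combination
print(X.shape)  # (8, 4)
```

Each head costs $O(N^2 m)$ naively (the $N \times N$ score matrix dominates when $m = N^{o(1)}$), which is exactly the per-head cost the paper shows cannot be beaten in aggregate under SETH.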
The $k$-Opt algorithm is a local search algorithm for the traveling salesman problem. Starting with an initial tour, it iteratively replaces at most $k$ edges in the tour with the same number of edges to obtain a better tour. Krentel (FOCS 1989) showed that the traveling salesman problem with the $k$-Opt neighborhood is complete for the class PLS (polynomial time local search). However, his proof requires $k \gg 1000$ and has a substantial gap. We provide the first rigorous proof for the PLS-completeness and at the same time drastically lower the value of $k$ to $k \geq 15$, addressing an open question by Monien, Dumrauf, and Tscheuschner (ICALP 2010). Our result holds for both the general and the metric traveling salesman problem.
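The local-search step that the abstract refers to is easiest to see for $k = 2$. The following Python sketch runs 2-Opt to a local optimum on a toy four-city instance; it is a didactic illustration of the neighborhood structure, not the $k \geq 15$ construction used in the PLS-completeness proof.

```python
import itertools

def tour_length(tour, d):
    """Total length of a closed tour under distance matrix d."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, d):
    """k-Opt local search for k = 2: reverse a segment whenever
    that shortens the tour, until no improving 2-exchange exists
    (i.e., until the tour is a 2-Opt local optimum)."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            new = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_length(new, d) < tour_length(tour, d):
                tour, improved = new, True
    return tour

# Four points on a unit square; the tour 0-2-1-3 crosses itself
# and 2-Opt uncrosses it to the optimal perimeter tour.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
d = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
      for bx, by in pts] for ax, ay in pts]
t = two_opt([0, 2, 1, 3], d)
print(round(tour_length(t, d), 3))  # 4.0, the square's perimeter
```

The PLS-hardness question concerns exactly this kind of iteration: whether a locally optimal tour can always be found in polynomial time, which the paper's result makes unlikely already for $k \geq 15$.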
We study the fundamental limitations of implementing time-dependent Hamiltonian protocols when "time" is provided by a quantum clock rather than an external classical parameter. For a parametric harmonic oscillator controlled through a shortcut-to-adiabaticity (STA) schedule and coupled to a minimal clock degree of freedom, tracing out the clock yields an effective reduced dynamics that is a mixture of unitary Gaussian trajectories. Within a noise-dominated regime, we compute the energetic deviation from the target STA outcome and its fluctuations, together with the fidelity to the target evolution and the purity loss of the reduced state, for vacuum and coherent initial states. Combining these observables produces a thermodynamic-uncertainty-type tradeoff that links achievable precision to an irreducible loss of purity set by the clock precision and the protocol sensitivity.
We study $k$-Kadison-Schwarz ($k$-KS) mappings on matrix algebras and derive explicit conditions ensuring the $k$-KS property for two classes of maps parameterized by a single $k$-positive map.