Top arXiv papers

  • PDF
    Continuous-variable systems enable key quantum technologies in computation, communication, and sensing. Bosonic Gaussian states emerge naturally in various such applications, including gravitational-wave and dark-matter detection. A fundamental question is how to characterize an unknown bosonic Gaussian state from as few samples as possible. Despite decades-long exploration, the ultimate efficiency limit remains unclear. In this work, we study the necessary and sufficient number of copies to learn an $n$-mode Gaussian state, with energy less than $E$, to $\varepsilon$ trace distance with high probability. We prove a lower bound of $\Omega(n^3/\varepsilon^2)$ for Gaussian measurements, matching the best known upper bound up to doubly-log energy dependence, and ${\Omega}(n^2/\varepsilon^2)$ for arbitrary measurements. We further show an upper bound of $\widetilde{O}(n^2/\varepsilon^2)$ given that the Gaussian state is promised to be either pure or passive. Interestingly, while Gaussian measurements suffice for nearly optimal learning of pure Gaussian states, non-Gaussian measurements are provably required for optimal learning of passive Gaussian states. Finally, focusing on learning single-mode Gaussian states via non-entangling Gaussian measurements, we provide a nearly tight bound of $\widetilde\Theta(E/\varepsilon^2)$ for any non-adaptive schemes, showing adaptivity is indispensable for nearly energy-independent scaling. As a byproduct, we establish sharp bounds on the trace distance between Gaussian states in terms of the total variation distance between their Wigner distributions, and obtain a nearly tight sample complexity bound for learning the Wigner distribution of any Gaussian state to $\varepsilon$ total variation distance. Our results greatly advance quantum learning theory in the bosonic regimes and have practical impact in quantum sensing and benchmarking applications.
  • PDF
    Quantum error correction (QEC), the lynchpin of fault-tolerant quantum computing (FTQC), is designed and validated against well-behaved Pauli stochastic error models. But in real-world deployment, QEC protocols encounter a vast array of other errors -- coherent and non-Pauli errors -- whose impacts on quantum circuits are vastly different than those of stochastic Pauli errors. The impacts of these errors on QEC and FTQC protocols have been largely unpredictable to date due to exponential classical simulation cost. Here, we show how to accurately and efficiently model the effects of coherent and non-Pauli errors on FTQC, and we study the effects of such errors on syndrome extraction for surface and bivariate bicycle codes, and on magic state cultivation. Our analysis suggests that coherent error can shift fault-tolerance thresholds, increase the space-time cost of magic state cultivation, and can increase logical error rates by an order of magnitude compared to equivalent stochastic errors. These analyses are enabled by a new technique for mapping any Markovian circuit-level error model with sufficiently small error rates onto a detector error model (DEM) for an FTQC circuit. The resulting DEM enables Monte Carlo estimation of logical error rates and noise-adapted decoding, and its parameters can be analytically related to the underlying physical noise parameters to enable approximate strong simulation.
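    The DEM-plus-Monte-Carlo workflow this abstract describes can be illustrated with off-the-shelf tools. A minimal sketch, assuming the open-source `stim` and `pymatching` packages (the circuit and depolarizing noise below are stand-ins, not the paper's coherent-error construction):

        import numpy as np
        import stim
        import pymatching

        # Stand-in memory experiment; the paper maps general Markovian noise
        # onto a DEM, while stim here builds one from circuit-level
        # depolarizing noise.
        circuit = stim.Circuit.generated(
            "surface_code:rotated_memory_z",
            distance=5, rounds=5,
            after_clifford_depolarization=1e-3,
        )
        dem = circuit.detector_error_model(decompose_errors=True)
        matcher = pymatching.Matching.from_detector_error_model(dem)

        # Monte Carlo estimate of the logical error rate from the DEM.
        sampler = circuit.compile_detector_sampler()
        detectors, observables = sampler.sample(100_000, separate_observables=True)
        predictions = matcher.decode_batch(detectors)
        print(np.mean(np.any(predictions != observables, axis=1)))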
  • PDF
    Post-quantum cryptography currently rests on a small number of hardness assumptions, posing significant risks should any one of them be compromised. This vulnerability motivates the search for new and cryptographically versatile assumptions that make a convincing case for quantum hardness. In this work, we argue that decoding random quantum stabilizer codes -- a quantum analog of the well-studied LPN problem -- is an excellent candidate. This task occupies a unique middle ground: it is inherently native to quantum computation, yet admits an equivalent formulation with purely classical input and output, as recently shown by Khesin et al. (STOC '26). We prove that the average-case hardness of quantum stabilizer decoding implies the core primitives of classical Cryptomania, including public-key encryption (PKE) and oblivious transfer (OT), as well as one-way functions. Our constructions are moreover practical: our PKE scheme achieves essentially the same efficiency as state-of-the-art LPN-based PKE, and our OT is round-optimal. We also provide substantial evidence that stabilizer decoding does not reduce to LPN, suggesting that the former problem constitutes a genuinely new post-quantum assumption. Our primary technical contributions are twofold. First, we give a reduction from random quantum stabilizer decoding to an average-case problem closely resembling LPN, but which is equipped with additional symplectic algebraic structure. While this structure is essential to the quantum nature of the problem, it raises significant barriers to cryptographic security reductions. Second, we develop a new suite of scrambling techniques for such structured linear spaces, and use them to produce rigorous security proofs for all of our constructions.
  • PDF
    Simulations of chemical dynamics are a powerful means for understanding chemistry. However, classical computers struggle to simulate many chemical processes, especially non-adiabatic ones, where the Born-Oppenheimer approximation breaks down. Quantum computers could simulate quantum-chemical dynamics more efficiently than classical computers, but there is currently no complete quantum algorithm for calculating dynamical observables to within a known error. Here, we develop an efficient, end-to-end quantum algorithm for simulating chemical dynamics that avoids all uncontrolled approximations (including the Born-Oppenheimer approximation) and whose error is bounded subject to mild assumptions. To do so, we treat the nuclei and the electrons on an equal footing and simulate the full molecular wavefunction on a momentum-space grid in first quantization, including all algorithmic steps: initial-state preparation, time evolution using qubitization, and measurement of chemical observables such as reaction yields and rates. Our work gives the first algorithm for quantum simulation of chemistry whose end-to-end complexity achieves sublinear scaling in the size of the grid. We achieve this by developing an exponentially faster method for initial-state-preparation. Photochemistry is a likely early application of our algorithm and we estimate resources required for end-to-end simulations of non-adiabatic dynamics of atmospherically important molecules. Classically intractable photochemical computations could be performed using resources comparable to those required for other chemical applications of quantum computing.
  • PDF
    Arithmetic operations are an important component of many quantum algorithms. As such, coming up with optimized quantum circuits for these operations leads to more efficient implementations of the corresponding algorithms. In this paper, we develop new fault-tolerant quantum circuits for various integer division algorithms (both reversible and non-reversible). These circuits, when implemented in the Clifford+T gate set, achieve an up to 76.08\% and 68.35\% reduction in T-count and CNOT-count, respectively, compared to previous circuit constructions. Some of our circuits also improve the asymptotic T-depth from $O(n^2)$ to $O(n \log n),$ where $n$ is the bit-length of the dividend. The qubit counts are also lower than in previous works. We achieve this by expressing the division algorithms in terms of a primitive we call COMP-N-SUB, that compares two integers and conditionally subtracts them. We show that this primitive can be implemented at a cost, in terms of both Clifford and non-Clifford gates, that is comparable to one addition. This is in contrast to performing comparison and conditional subtraction separately, whose cost would be comparable to a controlled addition plus a regular addition.
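    As a classical reference point for the primitive, here is a minimal sketch (function names hypothetical) of restoring division phrased so that each iteration is exactly one compare-and-conditionally-subtract, the operation that COMP-N-SUB implements reversibly:

        def comp_n_sub(r: int, d: int) -> tuple[int, int]:
            """One compare-and-conditional-subtract: (r - d, 1) if r >= d, else (r, 0)."""
            return (r - d, 1) if r >= d else (r, 0)

        def restoring_divide(dividend: int, divisor: int, n_bits: int) -> tuple[int, int]:
            """Quotient and remainder using one COMP-N-SUB per bit of the dividend."""
            remainder, quotient = 0, 0
            for i in reversed(range(n_bits)):
                remainder = (remainder << 1) | ((dividend >> i) & 1)
                remainder, q_bit = comp_n_sub(remainder, divisor)
                quotient = (quotient << 1) | q_bit
            return quotient, remainder

        assert restoring_divide(1234, 7, 11) == divmod(1234, 7)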
  • PDF
    Fermionic Gaussian circuits can be simulated efficiently on a classical computer, but become universal when supplemented with non-Gaussian operations. Similar to stabilizer circuits augmented with non-stabilizer resources, these non-Gaussian circuits can be simulated classically using rank- or extent-based methods. These methods decompose non-Gaussian states or operations into Gaussian ones, with runtimes that scale polynomially with measures of non-Gaussianity such as the rank and the extent -- quantities that typically grow exponentially with the number of non-Gaussian resources. Current fermionic rank- and extent-based simulators are limited to Gaussian circuits with magic-state injection. Extending them to mixed states and non-unitary channels has been hindered by the lack of known extent-optimized decompositions for physically relevant gates and noisy channels. In this work, we address this gap. First, we derive analytic decompositions for key non-Gaussian gates and channels, including decompositions for arbitrary two-qubit fermionic gates which are provably optimal for diagonal gates or those acting on Jordan-Wigner-adjacent qubit pairs. Second, we show that stochastic Pauli noise can reduce the effective extent of non-Gaussian rotation gates, but that fermionic magic is substantially more robust to such noise than stabilizer magic. Finally, we demonstrate how these decompositions can accelerate classical sampling from the output distribution of a quantum circuit. This involves a generalization of existing sparsification methods, previously limited to convex-unitary channels, to circuits involving intermediate measurements and feed-forward. Our decompositions also yield speedups for emulating noisy Pauli rotations with quasiprobability simulators in the large-angle/arbitrary-strength-noise and small-angle/low-noise parameter regimes.
  • PDF
    In this work, we characterize the $t$-th order commutants of fermionic Gaussian unitaries and of their particle-preserving subgroup acting on $n$ fermionic modes. These commutants govern Haar averages over the corresponding groups and therefore play a central role in fermionic randomized protocols, invariant theory, and resource quantification. Using Howe dualities, we show that the particle-preserving commutant is generated by generalized copy-hopping operators, while the commutant of the full Gaussian group is generated by generalized quadratic Majorana bilinears together with parity. We then derive closed formulas for the dimensions of both commutants as functions of $t$ and $n$, and develop constructive Gelfand--Tsetlin procedures to obtain explicit orthonormal bases, with detailed low-$t$ examples. Our framework also clarifies the structure of replicated fermionic states and connects naturally to measures of fermionic correlations, generalized Plücker-type constraints, and the stabilizer entropy of fermionic Gaussian states. These results provide a unified algebraic description of higher-order invariants for fermionic Gaussian dynamics.
  • PDF
    We present applications of quantum quadratic residue codes in magic state distillation. This includes showing that existing codes which are known to distill magic states, like the $5$-qubit perfect code, the $7$-qubit Steane code, and the $11$-qutrit and $23$-qubit Golay codes, are equivalent to certain quantum quadratic residue codes. We also present new examples of quantum quadratic residue codes that distill qubit $T$ states and qutrit Strange states with high thresholds, and we show that there are infinitely many quantum quadratic residue codes that distill $T$ states with a non-trivial threshold. All of these codes, including the codes with the highest currently known thresholds for $T$ state and Strange state distillation, are unified under the umbrella of quantum quadratic residue codes.
  • PDF
    Incorporating sample efficiency, by requiring that the number of states consumed by broadcasting not exceed that of a naive prepare-and-distribute strategy, gives rise to the no-practical-quantum-broadcasting theorem. To navigate this limitation, we introduce approximate and probabilistic virtual broadcasting and derive analytic expressions for their optimal sample complexity overheads. Allowing deviations at the receivers restores sample efficiency even in the 1-to-2 approximate setting, whereas probabilistic protocols obey a stronger no-go theorem that excludes all sample-efficient 1-to-2 implementations for arbitrary dimension and success probability. Rather counterintuitively, this obstruction does not persist at larger receiver numbers: for qubit systems, practical 1-to-6 virtual broadcasting becomes attainable. These results elevate sample complexity from a technical constraint to a defining operational principle, opening an unexplored route to the efficient distribution of quantum information.
  • PDF
    We review the recent quantum advantage experiments by IBM, D-Wave, and Google, focusing on cases where efficient classical simulations of the experiment were demonstrated or attempted using tensor network methods. We assess the strengths and limitations of these tensor network-based approaches and examine how the interplay between classical simulation and quantum hardware has advanced both fields. Our goal is to clarify what these results imply for the next generation of quantum advantage experiments. We identify regimes and system features that remain challenging for current tensor network approaches, and we outline directions where improved classical methods could further raise the standard for claiming quantum advantage. By analyzing this evolving competition, we aim to provide a clear view of where genuine, scalable quantum advantage is most likely to emerge.
  • PDF
    Ergotropy, the maximum work extractable from a quantum system, is a central resource in quantum physics. Computing ergotropy is well established when the system state is fully known, but its estimation under partial information remains an open problem. Here we introduce a general certification framework that lower bounds ergotropy using only the expectation values of a limited set of arbitrary observables. The method naturally applies in the finite-statistics regime, yielding confidence-certified bounds that explicitly incorporate shot noise. We benchmark our approach on both synthetic data and experimental measurements from an IBM quantum processor. This establishes a robust and experimentally accessible tool for certifying extractable work in realistic quantum settings.
  • PDF
    We describe an empirical approach to identify low-weight combinations of columns of the decoding matrices of a quantum circuit-level noise model for which belief-propagation (BP) algorithms may converge very slowly. Focusing on the logical-idle syndrome cycle of the low-density parity-check gross code, we identify criteria providing a characterization of the Tanner subgraph of such low-weight error syndromes. We analyze the dynamics of iterations when BP is used to decode weight-four and weight-five errors, finding statistics akin to exponential activation in the presence of noise or escape from chaotic phase-space domains. We study how BP convergence improves when adding to the decoding matrix relevant combinations of fault columns, and show that the suggested decoder amendment can result in the reduction of both logical errors and decoding time.
  • PDF
    Whether the complex numbers of standard quantum theory are experimentally indispensable has remained open for decades. Real quantum theory (RQT), obtained by replacing complex amplitudes with real ones while retaining the usual Kronecker-product composition rule, reproduces all single-party and bipartite Bell correlations of quantum theory (QT), but its lack of local tomography suggested that the two theories might diverge in more general local experiments. This possibility appeared to be confirmed by Renou et al., who argued that a bilocal network experiment can falsify RQT without falsifying QT. Here we show that this conclusion relies on an experimentally untestable assumption. The key distinction is between product-state independence, which constrains the mathematical form of source states, and operational independence, which is defined entirely by the absence of observable cross-source correlations. We prove that, once source independence is imposed operationally, every finite network correlation achievable in QT is also achievable in RQT with the same locality structure of the measurements. We then extend this equivalence to arbitrary finite sequential multipartite protocols involving channels and measurements with prescribed locality structure. Thus, as long as no violation of QT is observed, RQT cannot be experimentally falsified. Our results restore the empirical indistinguishability of QT and RQT, while showing that they support markedly different pictures of the correlation structure underlying the same observed world.
  • PDF
    We introduce a generalized low-density parity-check decoding framework for quantum Tanner codes utilizing soft-output guessing random additive noise decoding (SOGRAND). By soft-output decoding entire component codes, we mitigate trapping sets and cycles, resulting in improved convergence. SOGRAND, combined with ordered statistic decoding (OSD) post-processing, outperforms the standard belief propagation plus OSD baseline by up to three orders of magnitude in logical error rate, providing a way forward for scalable decoding of the emerging class of Tanner-code-based quantum codes.
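    For orientation, the guessing-random-additive-noise idea itself is compact. A hard-decision sketch on a toy [7,4] Hamming code (the paper's soft-output SOGRAND and Tanner-code components are not reproduced here): test error patterns in increasing Hamming weight until the syndrome is explained.

        import itertools
        import numpy as np

        def grand_decode(H, y, max_weight=3):
            """Return a codeword guess for received word y under parity checks H."""
            if not ((H @ y) % 2).any():
                return y  # already a codeword
            for w in range(1, max_weight + 1):
                for pos in itertools.combinations(range(H.shape[1]), w):
                    e = np.zeros(H.shape[1], dtype=int)
                    e[list(pos)] = 1
                    if not ((H @ ((y + e) % 2)) % 2).any():
                        return (y + e) % 2
            return None  # abandon guessing

        H = np.array([[1,0,1,0,1,0,1],   # [7,4] Hamming parity checks
                      [0,1,1,0,0,1,1],
                      [0,0,0,1,1,1,1]])
        y = np.array([1,1,1,0,1,0,0])    # codeword 1110000 with bit 5 flipped
        print(grand_decode(H, y))        # -> [1 1 1 0 0 0 0]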
  • PDF
    We introduce list privacy amplification (LPA), a relaxation of the final step of quantum key distribution (QKD) in which Alice and Bob extract a list of $L$ candidate keys from a raw string correlated with an eavesdropper Eve, with the guarantee that at least one key is perfectly secret while Eve cannot identify which. This parallels list decoding in error-correcting codes: relaxing unique decoding to list decoding increases the decoding radius; analogously, list extraction increases achievable key length beyond the standard quantum leftover hash lemma (QLHL). Within the abstract cryptography framework, we formalise LPA and prove the Quantum List Leftover Hash Lemma (QLLHL): an $L$-list of $\ell$-bit keys can be extracted from an $n$-bit source with smooth min-entropy $k$ iff $\ell \le k + \log L - 2\log(1/\epsilon) - 3$, yielding a tight additive $\log L$ gain over the QLHL. This gain arises because the index of the secure key is chosen after hashing and hidden from Eve, effectively contributing $\log L$ bits of entropy. Applying the QLLHL to BB84-type QKD, a list size $L = 2^{\alpha n'}$ increases the tolerable phase-error threshold from $h^{-1}(1 - h(e_b))$ to $h^{-1}(1 - h(e_b) + \alpha)$, exceeding the standard $\approx 11\%$ bound for any $\alpha > 0$. We prove tightness via a matching intercept-resend attack, establish composability with Wegman--Carter authentication, and present two constructions: a polynomial inner-product hash over $\mathbb{F}_{2^m}$ and a Toeplitz-based variant, running in $O(nL)$ and $O(nL \log n)$ time.
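    As a numerical sanity check of the stated bounds (a sketch; $h$ is the binary entropy, inverted on $[0, 1/2]$ by bisection, and all parameter values below are illustrative):

        import math

        def h(p):
            return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

        def h_inv(y):
            lo, hi = 0.0, 0.5
            for _ in range(60):
                mid = (lo + hi) / 2
                lo, hi = (mid, hi) if h(mid) < y else (lo, mid)
            return (lo + hi) / 2

        # Extractable list-key length: l <= k + log2(L) - 2*log2(1/eps) - 3.
        k, L, eps = 1000, 16, 1e-9
        print(k + math.log2(L) - 2*math.log2(1/eps) - 3)  # log2(L) = 4 extra bits

        # Shifted phase-error threshold: h^{-1}(1 - h(e_b) + alpha).
        e_b, alpha = 0.05, 0.02
        print(h_inv(min(1 - h(e_b) + alpha, 1.0)))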
  • PDF
    Nonlocal games provide application-level benchmarks for quantum hardware whose classical performance bounds are information-theoretic, holding against all classical strategies regardless of computational resources. We implement a 14-vertex graph coloring game, the smallest graph exhibiting a quantum-classical separation for this game type, on four trapped-ion quantum processors across three institutions. One system achieved a win rate that surpasses the classical bound with statistical significance, marking the first violation of a classical bound in a graph coloring nonlocal game on quantum hardware. The remaining systems achieved win rates comparable to the best superconducting processors evaluated on the same game, further illustrating the potential of nonlocal games as cross-architecture quantum benchmarks.
  • PDF
    Matrix product states (MPS) provide a powerful framework for characterizing one-dimensional symmetry-protected topological (SPT) phases of matter and for formulating Lieb-Schultz-Mattis (LSM)-type constraints. Here we generalize the MPS formalism to translationally invariant systems with general modulated symmetries. We show that the standard symmetry "push-through" condition for conventional global symmetry must be revised to account for symmetry modulation, and we derive the appropriate generalized condition. Using this generalized push-through structure, we classify one-dimensional SPT phases with modulated symmetries and formulate LSM-type constraints within the same MPS-based framework.
  • PDF
    Parameterized quantum circuits (PQCs) are central to quantum machine learning and near-term quantum simulation, but their scalability is often hindered by barren plateaus (BPs), where gradients decay exponentially with system size. Prior explanations, including expressivity, entanglement, locality, and noise, are often presented in ways that conflate two distinct issues: concentration of the measured observable and loss of parameter sensitivity caused by circuit dynamics. We develop a unified statistical framework that separates these mechanisms. We show that several standard BP explanations, including locality- and entanglement-related effects, can be understood through a single phenomenon that we term observable concentration (OC). Importantly, we prove that avoiding OC is necessary but not sufficient for trainability. Beyond OC, we identify two distinct mid-circuit sources of gradient suppression. First, mid-circuit information loss occurs when parameter perturbations propagate into degrees of freedom that are inaccessible to the final measurement, yielding little or no response. Second, mid-circuit information scrambling occurs when local perturbations rapidly spread across the system and become effectively undetectable on the measured subsystem. We support our theory with explicit constructions and numerical evidence, including quantum convolutional neural network architectures that exhibit information-loss-induced barren plateaus despite the absence of observable concentration.
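    The observable-concentration phenomenon itself is easy to see numerically. A small illustration (not from the paper): the variance of $\langle Z_1 \rangle$ over Haar-random states decays like $1/(2^n + 1)$ with the number of qubits $n$.

        import numpy as np

        rng = np.random.default_rng(0)

        def haar_state(dim):
            v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
            return v / np.linalg.norm(v)

        for n in range(2, 9):
            dim = 2 ** n
            # <Z_1> = +1 on basis states whose first qubit is 0, else -1.
            z1 = np.where(np.arange(dim) < dim // 2, 1.0, -1.0)
            vals = [z1 @ np.abs(haar_state(dim)) ** 2 for _ in range(2000)]
            print(n, np.var(vals))  # ~ 1 / (2**n + 1)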
  • PDF
    Fault-tolerant quantum computation demands extremely low logical error rates, yet superconducting qubit arrays are subject to radiation-induced correlated noise arising from cosmic-ray muon-generated quasiparticles (QPs). The QP density is unknown and time-varying, resulting in a mismatch between the true noise statistics and the priors assumed by standard decoders, and consequently, degraded logical performance. We formalize joint noise sensing and decoding using syndrome measurements by modeling the QP density as a latent variable, which governs correlation in physical errors and syndrome measurements. Starting from a variational expectation--maximization approach, we derive an iterative algorithm that alternates between QP density estimation and syndrome-based decoding under the updated noise model. Simulations of surface-code and bivariate bicycle quantum memory under radiation-induced correlated noise demonstrate a measurable reduction in logical error probability relative to baseline decoding with a uniform prior. Beyond improved decoding performance, the inferred QP density provides diagnostic information relevant to device characterization, shielding, and chip design. These results indicate that integrating physical noise estimation into decoding can mitigate correlated noise effects and relax effective error-rate requirements for fault-tolerant quantum computation.
  • PDF
    We demonstrate that absolutely maximally entangled (AME) states consisting of $N=4k$ qudits with $k\in\mathbb{N}_+$, each of even local dimension, cannot be realized as graph states. This result imposes strong constraints on AME states in composite local dimensions and characterizes the limitations of graph-state constructions for highly entangled multipartite quantum systems. In particular, this study provides an independent solution of the recently discussed case of the AME state of four quhexes and clarifies its characterization within the stabilizer formalism, complementing the results of Cha [arXiv:2603.13442].
  • PDF
    Fair threshold estimation for bivariate bicycle (BB) codes on the quantum erasure channel runs into two recurring problems: decoder-baseline unfairness and the conflation of finite-size pseudo-thresholds with true asymptotic thresholds. We run both uninformed and erasure-aware minimum-weight perfect matching (MWPM) surface code baselines alongside BP-OSD decoding of BB codes. With standard depolarizing-weight MWPM and no erasure information, performance matches random guessing on the erasure channel in our tested regime -- so prior work that compares against this baseline is really comparing decoders, not codes. Using 200,000 shots per point and bootstrap confidence intervals, we sweep five BB code sizes from $N=144$ to $N=1296$. Pseudo-thresholds (WER = 0.10) run from $p^* = 0.370$ to $0.471$; finite-size scaling (FSS) gives an asymptotic threshold $p^*_\infty \approx 0.488$, within 2.4\% of the zero-rate limit and without maximum-likelihood decoding. On the fair baseline, BB at $N=1296$ has a modest edge in threshold over the surface code at twice the qubit count, and a 12$\times$ lower normalized overhead -- the latter is where the practical advantage sits. All runs are reproducible from recorded seeds and package versions.
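    The per-point statistics described here follow a standard recipe. A sketch on synthetic decoder outcomes (the failure probability below is a stand-in): estimate the word error rate from Bernoulli shots and attach a nonparametric bootstrap confidence interval.

        import numpy as np

        rng = np.random.default_rng(1)
        fails = rng.random(200_000) < 0.10   # stand-in decoder failures, WER = 0.10

        wer = fails.mean()
        boot = [rng.choice(fails, size=fails.size, replace=True).mean()
                for _ in range(1000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"WER = {wer:.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")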
  • PDF
    The distance from calibration, introduced by Błasiok, Gopalan, Hu, and Nakkiran (STOC 2023), has recently emerged as a central measure of miscalibration for probabilistic predictors. We study the fundamental problems of computing and estimating this quantity, given either an exact description of the data distribution or only sample access to it. We give an efficient algorithm that exactly computes the calibration distance when the distribution has a uniform marginal and noiseless labels, which improves the $O(1/\sqrt{|\mathcal{X}|})$ additive approximation of Qiao and Zheng (COLT 2024) for this special case. Perhaps surprisingly, the problem becomes $\mathsf{NP}$-hard when either of the two assumptions is removed. We extend our algorithm to a polynomial-time approximation scheme for the general case. For the estimation problem, we show that $\Theta(1/\epsilon^3)$ samples are sufficient and necessary for the empirical calibration distance to be upper bounded by the true distance plus $\epsilon$. In contrast, a polynomial dependence on the domain size -- incurred by the learning-based baseline -- is unavoidable for two-sided estimation. Our positive results are based on simple sparsifications of both the distribution and the target predictor, which significantly reduce the search space for computation and lead to stronger concentration for the estimation problem. To prove the hardness results, we introduce new techniques for certifying lower bounds on the calibration distance -- a problem that is hard in general due to its $\textsf{co-NP}$-completeness.
  • PDF
    Do black holes possess entropy, or do they create it? The dominant assumption is that they possess entropy and that, as they evaporate, this entropy is emitted and decreases. In this paper I use a model of a linear amplifier, for which I argue that the amplifier has no entropy and yet emits entropy in the course of its operation. This model is closely related to the behaviour of black holes, leading to the answer to the title question: black holes do not have entropy, but they nevertheless create and emit entropy, with the total entropy emitted matching the usual expression proportional to the square of the mass of the black hole.
  • PDF
    Photonic quantum computing has gained significant interest in recent years due to its potential for scaling to large numbers of qubits. A critical requirement for fault-tolerant quantum computation is the reliable generation of non-Gaussian quantum states, typically achieved using Gaussian operations and photon-number-resolving detectors. However, the probabilistic nature of quantum measurement typically results in low success rates for state preparation. Conventionally, these circuits are optimized to herald a single specific target outcome, thereby disregarding the potential utility of alternative measurement patterns generated by the same physical setup. In this work, we propose and demonstrate a multi-outcome optimization strategy that increases the overall acceptance probability by allowing a single circuit to produce useful quantum states across several measurement patterns. To evaluate this approach, we apply the framework to the generation of Gottesman-Kitaev-Preskill core states, Schrödinger cat states, binomial codes, and cubic phase states using both two-mode and three-mode Gaussian circuits. We demonstrate that the success probability can be enhanced through two distinct mechanisms: first, by simultaneously targeting a diverse set of useful resource states, and second, by aggregating degenerate outcomes to maximize the production rate of a single target state.
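    A toy accounting of the two mechanisms (all probabilities and fidelities below are hypothetical): accept every heralding pattern whose output is useful, and pool degenerate patterns that herald the same target.

        # pattern -> (herald probability, target state, fidelity with target)
        patterns = {
            (2, 0): (0.010, "cat",   0.99),
            (0, 2): (0.010, "cat",   0.99),   # degenerate with (2, 0)
            (1, 1): (0.015, "GKP",   0.97),
            (3, 0): (0.004, "cubic", 0.95),
        }
        cutoff = 0.96
        accepted = {pat: (p, t) for pat, (p, t, f) in patterns.items() if f >= cutoff}
        total = sum(p for p, _ in accepted.values())
        best_single = max(p for p, _, _ in patterns.values())
        print(f"multi-outcome acceptance {total:.3f} vs single-outcome {best_single:.3f}")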
  • PDF
    The field of learning-augmented algorithms seeks to use ML techniques on past instances of a problem to inform an algorithm designed for a future instance. In this paper, we introduce a novel model for learning-augmented algorithms inspired by online learning. In this model, we are given a sequence of instances of a problem and the goal of the learning-augmented algorithm is to use prior instances to propose a solution to a future instance of the problem. The performance of the algorithm is measured by its average performance across all the instances, where the performance on a single instance is the ratio between the cost of the algorithm's solution and that of an optimal solution for that instance. We apply this framework to the classic $k$-median clustering problem, and give an efficient learning algorithm that can approximately match the average performance of the best fixed $k$-median solution in hindsight across all the instances. We also experimentally evaluate our algorithm and show that its empirical performance is close to optimal, and also that it automatically adapts the solution to a dynamically changing sequence.
  • PDF
    We introduce a measurement-induced quantum neural network (MINN), an adaptive monitored-circuit architecture in which mid-circuit measurement outcomes determine the entangling gates in subsequent layers. In contrast to standard monitored circuits where sites and gates are sampled randomly, the gates are parametrized and variational, producing correlated history-dependent dynamics and injecting nonlinearity through measurement back-action. A generic MINN is not expected to be efficiently classically simulable. To demonstrate feasibility, we study a matchgate MINN that admits exact fermionic simulation and can be trained with gradient estimators. We apply the architecture to continuous optimization, image classification, and ground-state search in the Sherrington-Kirkpatrick spin glass, finding effective training and performance over a broad range of monitoring rates.
  • PDF
    We investigate the quantum advantage that can arise in typical two-party communication scenarios, where the sender and the receiver are allowed to share prior correlations. Focusing on communication tasks constrained by the distinguishability of the sender's inputs, we demonstrate that entanglement-assisted communication, both classical and quantum, can outperform classical communication supplemented with shared randomness. We begin by developing a general framework for communication tasks with pre-shared correlations. We identify certain communication tasks that exhibit an advantage under entanglement assistance compared to classical communication. Through these results, we establish a connection between quantum communication and entanglement-assisted classical communication, and also show an equivalence between entanglement-assisted classical communication and entanglement-assisted quantum communication. We then consider the simplest scenarios in which the receiver has no input and demonstrate that entanglement-assisted strategies still offer advantages over both classical communication and quantum communication without prior entanglement. Finally, by constructing a class of communication tasks, we show that a non-maximally entangled state can, in some cases, be more useful than a maximally entangled state as a pre-shared resource.
  • PDF
    The standard approach to quantum measurements is to assume that they lead to effectively instantaneous collapse of the quantum state. However, if we assume that we are unable to enforce at what exact moment of time the measurement occurs due to a finite resolution of any time measurement device, at the level of the ensemble, the measurement would lead to an effectively nonunitary evolution involving a mixed state. Each individual ensemble member would face an instantaneous collapse at different moments of time. This process is completely indistinguishable from fundamental nonunitary evolution at the level of each individual ensemble member, within the framework of strong projective measurements. In this paper, we show that weak postselected measurements can distinguish these two types of evolution. An experimental protocol for determining the nature of quantum collapse is described, and the example of a hydrogen atom is analyzed in detail.
  • PDF
    We propose a more accurate variant of an algorithm for multiplying $4\times 4$ matrices using 48 multiplications over any ring containing an inverse of 2. This algorithm has an error bound exponent of only $\log_4 \gamma_{\infty,2} \approx 2.386$. It also reaches a better accuracy w.r.t. the max-norm in practice, when compared to previously known such fast algorithms. Furthermore, we propose a straight-line program of this algorithm, giving a leading constant in its complexity bound of $\frac{387}{32} n^{2+\log_4 3} + o\left(n^{2+\log_4 3}\right)$ operations over any ring containing an inverse of 2. Introduction: An algorithm to multiply two $4\times 4$ complex-valued matrices requiring only 48 non-commutative multiplications was introduced in [16] using a pipeline of large language models orchestrated by an evolutionary coding agent. A matrix multiplication algorithm with that many non-commutative multiplications is denoted by $\langle 4\times 4\times 4:48\rangle$ in the sequel. An equivalent variant of the associated tensor decomposition defining this algorithm, but over the rationals (more precisely over any ring containing an inverse of 2), was then given in [8]. Most error analyses of sub-cubic time matrix multiplication algorithms [3, 4, 2, 1, 17] are given in the max-norm setting: bounding the largest output error as a function of the max-norm product of the vectors of input matrix coefficients. In this setting, Strassen's algorithm has shown the best accuracy bound (proven minimal under some assumptions in [2]). In [6, 8], the authors relaxed this setting by shifting the focus to the 2-norm for input and/or output; that allowed them to propose a $\langle 2\times 2\times 2:7\rangle$ variant with an improved accuracy bound. Experiments show that this variant performs best even when measuring the max-norm of the error. We present in this note a variant of the recent $\langle 4\times 4\times 4:48\rangle$ algorithm over the rationals (again in the same orbit under de Groote isotropies [10]) that is more numerically accurate w.r.t. the max-norm in practice. In particular, our new variant improves on the error bound exponent, from $\log_2 \gamma_{\infty,2} \approx 2.577$ to $\log_4 \gamma_{\infty,2} \approx 2.386$. Consider the product of an $M\times K$ matrix $A$ by a $K\times N$ matrix $B$. It is computed by an $\langle m,k,n\rangle$ algorithm, represented by the matrices $L, R, P$, applied recursively on $\ell$ recursive levels, and the resulting $m_0\times k_0$ by $k_0\times n_0$ products are performed using an algorithm $\beta$. Here $M = m_0 m^{\ell}$, $K = k_0 k^{\ell}$ and $N = n_0 n^{\ell}$. The accuracy bound below uses any (possibly different) $p$-norm and $q$-norm for its left-hand side, $\|\cdot\|_p$, and right-hand side, $\|\cdot\|_q$. The associated dual norms are denoted by $\|\cdot\|_{p^\star}$ and $\|\cdot\|_{q^\star}$, respectively. Note that these are vector norms; hence $\|A\|_p$ for a matrix $A \in \mathbb{R}^{m\times n}$ denotes $\|\mathrm{Vect}(A)\|_p$, the $p$-norm of the $mn$-dimensional vector of its coefficients, and not a matrix norm.
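    The kind of max-norm accuracy experiment described can be reproduced in miniature. A sketch follows; since the abstract does not list the $\langle 4\times 4\times 4:48\rangle$ coefficients, Strassen's classical $\langle 2\times 2\times 2:7\rangle$ algorithm stands in for the recursive fast multiply, with errors measured against a float64 reference.

        import numpy as np

        def strassen(A, B):
            n = A.shape[0]
            if n <= 8:                      # base case: naive product
                return A @ B
            h = n // 2
            a, b, c, d = A[:h,:h], A[:h,h:], A[h:,:h], A[h:,h:]
            e, f, g, k = B[:h,:h], B[:h,h:], B[h:,:h], B[h:,h:]
            m1 = strassen(a + d, e + k); m2 = strassen(c + d, e)
            m3 = strassen(a, f - k);     m4 = strassen(d, g - e)
            m5 = strassen(a + b, k);     m6 = strassen(c - a, e + f)
            m7 = strassen(b - d, g + k)
            return np.block([[m1 + m4 - m5 + m7, m3 + m5],
                             [m2 + m4, m1 - m2 + m3 + m6]])

        rng = np.random.default_rng(2)
        A, B = rng.standard_normal((64, 64)), rng.standard_normal((64, 64))
        exact = A @ B                                   # float64 reference
        fast = strassen(A.astype(np.float32), B.astype(np.float32))
        naive = A.astype(np.float32) @ B.astype(np.float32)
        print("fast  max-norm error:", np.max(np.abs(fast - exact)))
        print("naive max-norm error:", np.max(np.abs(naive - exact)))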
  • PDF
    Phase transitions in a modified Nishimori model, including the model considered by Kitatani, on a two-dimensional square lattice are investigated using a tensor-network-based sampling scheme. In this model, generating bond configurations is computationally demanding because of the correlated random interactions. The employed sampling method enables hierarchical and independent sampling of both bonds and spins. This approach allows high-precision calculations for system sizes up to $L=256$. The results provide clear numerical evidence that the spin-glass and ferromagnetic transitions are separated on the Nishimori line, supporting the existence of an intermediate Mattis-like spin-glass phase. This finding is consistent with the reentrant transition numerically observed in the two-dimensional Edwards-Anderson (EA) model. Furthermore, critical exponents estimated via finite-size-scaling analysis indicate that the universality class of the transitions differs from that of the standard independent and identically distributed EA model.
  • PDF
    Multi-qubit quantum sensors are rapidly emerging as platforms that extend the capabilities of conventional single-qubit sensing. In this work we show how suitable pulse sequences applied to a two-qubit sensor enable separate extraction of the response and noise of a probed environment within a $T_2$ spectroscopy framework. By resorting to representative examples, we demonstrate that this approach can resolve the spatio-temporal spreading of correlations in a many-body system. In particular, the resulting correlated dephasing signal captures features such as the dispersion of low-energy excitations, which manifest as light-cone-like profiles in the propagation of correlations. We further show that non-equilibrium conditions, for instance those induced by external driving, can modify this profile by producing additional fringes outside the light-cone. As a complementary application, we demonstrate that the method clearly distinguishes between different transport regimes in the system, including ballistic spreading, diffusive broadening, and the crossover between them.
  • PDF
    Online learning in arbitrary, and possibly adversarial, environments has been extensively studied in sequential decision-making, and it is closely connected to equilibrium computation in game theory. Most existing online learning algorithms rely on numeric utility feedback from the environment, which may be unavailable in human-in-the-loop applications and/or may be restricted by privacy concerns. In this paper, we study an online learning model in which the learner only observes a ranking over a set of proposed actions at each timestep. We consider two ranking mechanisms: rankings induced by the instantaneous utility at the current timestep, and rankings induced by the time-average utility up to the current timestep, under both full-information and bandit feedback settings. Using the standard external-regret metric, we show that sublinear regret is impossible with instantaneous-utility ranking feedback in general. Moreover, when the ranking model is relatively deterministic, i.e., under the Plackett-Luce model with a temperature that is sufficiently small, sublinear regret is also impossible with time-average utility ranking feedback. We then develop new algorithms that achieve sublinear regret under the additional assumption that the utility sequence has sublinear total variation. Notably, for full-information time-average utility ranking feedback, this additional assumption can be removed. As a consequence, when all players in a normal-form game follow our algorithms, repeated play yields an approximate coarse correlated equilibrium. We also demonstrate the effectiveness of our algorithms in an online large-language-model routing task.
  • PDF
    Quantum block encoding (QBE) is a crucial step in the development of most quantum algorithms, as it provides an embedding of a given matrix into a suitable larger unitary matrix. Historically, the development of efficient techniques for QBE has mostly focused on sparse matrices; less effort has been devoted to data-sparse (e.g., rank-structured) matrices. In this work we examine a particular case of rank structure, namely, one-pair semiseparable matrices. We present a new block encoding approach that relies on a suitable factorization of the given matrix as the product of triangular and diagonal factors. To encode the matrix, the algorithm needs $2\log(N)+7$ ancillary qubits. This process takes polylogarithmic time and has an error of $\mathcal{O}(N^2)$, where $N$ is the matrix size.
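    For concreteness, one standard definition of the structure in question (Gantmacher--Krein one-pair matrices; the paper may use a different convention) is, as a math block:

        % One-pair (single-pair) matrix generated by vectors u, v:
        S_{ij} =
        \begin{cases}
          u_i v_j, & i \le j,\\
          u_j v_i, & i > j,
        \end{cases}
        \qquad u, v \in \mathbb{R}^{N}.

    Such a matrix is determined by only $2N$ parameters, which is what makes a factorization into triangular and diagonal factors, and hence a block encoding with only $O(\log N)$ ancillary qubits, plausible.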
  • PDF
    We introduce and characterize different models for an active quantum particle where activity arises from engineered dissipation -- specifically, from a suitably coupled nonequilibrium environment. These include a model of a particle moving on a lattice with coherent and dissipative hopping, as well as quantum generalizations of well-studied models of active behavior, such as the active Ornstein-Uhlenbeck process, run-and-tumble dynamics, and the active Brownian particle. Despite the different microscopic mechanisms at play, we show that all these models display key features of active motion. Notably, we observe a crossover from diffusive to active-diffusive behavior at long times, leading to an effective Péclet number, as well as a strong sensitivity to boundary conditions which, in our open quantum system context, arises from the Liouville skin effect. We discuss the role of quantum fluctuations and experimental realizations with superconducting circuits or cold gases, closing with perspectives for many-body effects in quantum active matter.
  • PDF
    Models of interacting quantum spins are used in many areas of physics ranging from the study of magnetism and strongly correlated materials to quantum sensing. In this work, we study coherent many-body dynamics of interacting spin models realized using polar molecules trapped in rearrangeable optical tweezer arrays. Specifically, we encode quantum spins in long-lived rotational states and use the electric dipolar interaction between molecules, together with Floquet Hamiltonian engineering, to realize $1/r^3$ XXZ and XYZ models. We microscopically probe several types of coherent dynamics in these models, including quantum walks of single spin excitations, the emergence of magnon bound states, and coherent creation and annihilation of magnon pairs. Our results establish molecular tweezer arrays as a new quantum simulation platform for interacting quantum spin models.
  • PDF
    Chemistry and materials science are widely regarded as potential killer application fields for quantum hardware. While the dream of unlocking unprecedented simulation capabilities remains compelling, quantum algorithm development must adapt to the evolving constraints of the emerging quantum hardware in order to accomplish any advantage for the computational chemistry practice. At the same time, the continuous advancement of classical wavefunction-theory methods narrows the window for a broad quantum advantage. Here, we explore potential benefits of quantum computation from the broader perspective of utility-scale applications. We argue that quantum algorithms must not only enable accurate calculations for a few challenging (that is, strongly correlated) molecular structures that might be hard to describe with traditional methods. Instead, they must also support the practical integration of quantum-accelerated computations into high-throughput pipelines for routine calculations on arbitrary molecules, ultimately delivering a tangible value to society.
  • PDF
    The short quantum link regime, where the photon travel time $\tau$ is comparable to the emitter lifetime $1/\gamma$, is experimentally relevant but theoretically underexplored: existing few-mode descriptions lose validity as retardation and multimode effects become significant. Using a Delay Differential Equation (DDE) framework that admits exact analytical solutions from the single-mode cavity limit to the multimode waveguide continuum, we show that emitters coupled to a short link spontaneously lock into self-synchronized Rabi oscillations driven by coherent photon echoes, breaking the link's discrete time-displacement symmetry. The resulting spectral structure -- persistent quasi-dark states and vacuum Rabi splitting, including in the superstrong coupling regime -- enables efficient quantum state transfer (QST): benchmarking three protocols across the full $\gamma\tau$ parameter space, we find that STIRAP exploits the quasi-dark-state structure to achieve a quadratic infidelity floor $\mathcal{O}((\gamma\tau)^2)$, outperforming both SWAP (linear error $\mathcal{O}(\gamma\tau)$) and wavepacket engineering for $\gamma\tau \lesssim 1.44$, even in regimes where retardation cannot be neglected. These results establish photon-echo synchronization as an engineering resource for quantum state transfer, with DDE modeling providing the exact analytical predictions needed to design and optimize short-link experiments on current circuit-QED hardware.
  • PDF
    These notes are based on lectures delivered by G. Schehr at the XVIth School on Fundamental Problems in Statistical Physics (FPSP), held in Oropa (Italy) from 30 June to 11 July 2025. After a brief introduction to extreme value statistics (EVS) for independent and identically distributed (IID) random variables, we discuss several paradigmatic examples of strongly correlated systems where classical extreme value theory no longer applies. In particular, we focus on time series generated by random walks and Brownian motion, as well as on eigenvalue statistics in random matrix theory. Emphasis is placed on applications of EVS to fundamental problems in statistical physics and disordered systems, including the Random Energy Model, stochastic search problems, as well as fluctuating interfaces, and directed polymers in random media within the Kardar-Parisi-Zhang universality class.
  • PDF
    Quantum metrology enables parameter estimation beyond classical limits by exploiting nonclassical resources such as squeezing and entanglement. In distributed quantum sensing, Heisenberg scaling has been extended from $1/N^2$ to $1/(NM)^2$ through entanglement across both particles and spatial modes, where $N$ denotes the photon number and $M$ the number of spatially distributed modes. However, the overall sensitivity has remained limited to linear scaling with the number of measurement repetitions $R$. Here, we show that exploiting entanglement across temporal modes via time-domain multiplexing enables a scaling advantage with respect to $R$. As a result, the sensitivity can asymptotically approach simultaneous Heisenberg scaling in photons, spatial modes, and repetitions, yielding an overall sensitivity approaching $\Delta^2 \phi \propto 1/(NMR)^2$. Using the Bogoliubov transformation formalism, we prove the optimality of the protocol within the class of Gaussian states and show that the scaling is realizable via homodyne detection and maximum-likelihood estimation. We further show that the advantage persists under optical loss and propose an experimentally feasible loop-based photonic sensing scheme. Our results open a route to incorporating time-multiplexing techniques into quantum metrology.
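    The claimed improvement can be summarized in one line (notation as in the abstract; the left-hand side is the standard protocol, in which the $R$ repetitions enter incoherently):

        \Delta^2\phi \;\propto\; \frac{1}{(NM)^2\, R}
        \quad\longrightarrow\quad
        \Delta^2\phi \;\propto\; \frac{1}{(NMR)^2},

    where the right-hand side is reached by entangling the temporal modes across repetitions.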
  • PDF
    The integration of diverse quantum resources and the exploitation of more degrees of freedom provide key operational flexibility for universal fault-tolerant quantum computation. In this work, we propose a flexible Gottesman-Kitaev-Preskill-state-embedded fault-tolerant quantum computation architecture based on a three-dimensional cluster state constructed in polarization, frequency, and orbital angular momentum domains. Specifically, we design optical entanglement generators to produce three diverse entangled pairs, and subsequently construct a three-dimensional cluster state via a beam-splitter network with several time delays. Furthermore, we present a partially squeezed surface-GKP code to achieve fault-tolerant quantum computation and ultimately find the optimal choice of implementing the squeezing gate to give the best fault-tolerant performance (the fault-tolerant squeezing threshold is 11.5 dB). Our scheme is flexible, scalable, and experimentally feasible, providing versatile options for future optical fault-tolerant quantum computation architecture.
  • PDF
    We study auction design in the celebrated interdependence model introduced by Milgrom and Weber [1982], where a mechanism designer allocates a good, maximizing the value of the agent who receives it, while inducing truthfulness using payments. In the lesser-studied procurement auctions, one allocates a chore, minimizing the cost incurred by the agent selected to perform it. Most of the past literature in theoretical computer science considers designing truthful mechanisms with constant approximation for the value setting, with restricted domains and monotone valuation functions. In this work, we study the general computational problems of optimizing the approximation ratio of truthful mechanism, for both value and cost, in the deterministic and randomized settings. Unlike most previous works, we remove the domain restriction and the monotonicity assumption imposed on value functions. We provide theoretical explanations for why some previously considered special cases are tractable, reducing them to classical combinatorial problems, and providing efficient algorithms and characterizations. We complement our positive results with hardness results for the general case, providing query complexity lower bounds, and proving the NP-Hardness of the general case.
  • PDF
    We study how to construct compressed datasets that suffice to recover optimal decisions in linear programs with an unknown cost vector $c$ lying in a prior set $\mathcal{C}$. Recent work by Bennouna et al. provides an exact geometric characterization of sufficient decision datasets (SDDs) via an intrinsic decision-relevant dimension $d^\star$. However, their algorithm for constructing minimum-size SDDs requires solving mixed-integer programs. In this paper, we establish hardness results showing that computing $d^\star$ is NP-hard and deciding whether a dataset is globally sufficient is coNP-hard, thereby resolving a recent open problem posed by Bennouna et al. To address this worst-case intractability, we introduce pointwise sufficiency, a relaxation that requires sufficiency for an individual cost vector. Under nondegeneracy, we provide a polynomial-time cutting-plane algorithm for constructing pointwise-sufficient decision datasets. In a data-driven regime with i.i.d.\ costs, we further propose a cumulative algorithm that aggregates decision-relevant directions across samples, yielding a stable compression scheme of size at most $d^\star$. This leads to a distribution-free PAC guarantee: with high probability over the training sample, the pointwise sufficiency failure probability on a fresh draw is at most $\tilde{O}(d^\star/n)$, and this rate is tight up to logarithmic factors. Finally, we apply decision-sufficient representations to contextual linear optimization, obtaining compressed predictors with generalization bounds scaling as $\tilde{O}(\sqrt{d^\star/n})$ rather than $\tilde{O}(\sqrt{d/n})$, where $d$ is the ambient cost dimension.
  • PDF
    Traversable wormhole teleportation in the Sachdev-Ye-Kitaev (SYK) model links quantum channel integrity to black hole interior dynamics, using teleportation fidelity to probe holographic scrambling. We subject the SYK boundary to a gravitational-wave (GW)-inspired periodic Floquet deformation, mimicking a leading-order metric-strain perturbation from the JT-gravity dictionary. We characterize the channel response via exact numerical time evolution with disorder averaging at $\beta J = 2$. The drive produces a coherent, frequency-selective fidelity suppression, yielding four main results: (i) two amplitude regimes separated near $\varepsilon \sim J$ (perturbative sensing vs.\ strong-drive); (ii) the channel acts as a low-pass filter, most sensitive at $\omega \lesssim \beta^{-1}$ with monotone recovery above the thermal scale; (iii) an inspiral chirp drive delays the fidelity peak by $\Delta t_{\rm scr}^{(\rm fid)} = +0.11\, J^{-1}$, corroborated by an out-of-time-order correlator (OTOC) diagnostic ($\Delta t_{\rm scr}^{(\rm OTOC)} = +0.20\, J^{-1}$), establishing a genuine scrambling delay; and (iv) the effects persist across $N \in \{10, 12, 14, 16\}$ Majorana modes, indicating no systematic finite-size suppression. These results establish that holographic teleportation channels degrade gracefully under GW-inspired boundary deformations, with direct implications for near-term quantum processor implementations of traversable wormholes.
  • PDF
    The Kidney Exchange Problem is a prominent challenge in healthcare and economics, arising in the context of organ transplantation. It has been extensively studied in artificial intelligence and optimization. In a kidney exchange, a set of donor-recipient pairs and altruistic donors are considered, with the goal of identifying a set of exchanges -- comprising cycles, or chains starting from altruistic donors -- such that each donor provides a kidney to the compatible recipient in the next donor-recipient pair. Due to constraints in medical resources, some limits are often imposed on the lengths of these cycles and chains. These exchanges create a network of transplants aimed at maximizing the total number, $t$, of successful transplants. Recently, this problem was deterministically solved in $O^*(14.34^t)$ time (IJCAI 2024). In this paper, we introduce the representative set technique for the Kidney Exchange Problem, showing that the problem can be deterministically solved in $O^*(6.855^t)$ time.
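    For very small instances, the objective can be checked against a brute-force baseline (a sketch; altruistic-donor chains are omitted, and nothing here reflects the paper's representative-set technique): enumerate simple cycles up to the length cap and pick a vertex-disjoint family covering the most pairs.

        from itertools import combinations, permutations

        def short_cycles(arcs, n, max_len):
            cycles = set()
            for length in range(2, max_len + 1):
                for nodes in permutations(range(n), length):
                    if nodes[0] != min(nodes):
                        continue  # keep one canonical rotation per cycle
                    if all((nodes[i], nodes[(i + 1) % length]) in arcs
                           for i in range(length)):
                        cycles.add(nodes)
            return [set(c) for c in cycles]

        def max_transplants(arcs, n, max_len=3):
            cycles, best = short_cycles(arcs, n, max_len), 0
            for r in range(len(cycles) + 1):
                for combo in combinations(cycles, r):
                    used = set().union(*combo) if combo else set()
                    if sum(len(c) for c in combo) == len(used):
                        best = max(best, len(used))
            return best

        # 5 donor-recipient pairs; arc (i, j): donor i matches recipient j.
        arcs = {(0, 1), (1, 0), (1, 2), (2, 3), (3, 1), (3, 4), (4, 3)}
        print(max_transplants(arcs, 5))  # 2-cycles {0,1} and {3,4}: 4 transplants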
  • PDF
    We establish Kirchberg's Local Lifting Property and Lubotzky-Shalom's Property FD for classes of finitely generated groups of central importance in geometric and combinatorial group theory: $3$-manifold groups, limit groups, and certain one-relator groups and right-angled Artin groups. We deduce that such groups are very flexibly stable, with respect to normalized unitarily invariant norms. The exposition is made accessible to operator algebraists and group theorists alike.
  • PDF
    We study the cosmological implications of the minimal non-linear realisation of scale invariance within the Standard Model (SM). This framework provides a technically natural explanation for the hierarchy between the Planck scale and the electroweak scale and introduces only a light, feebly coupled dilaton field beyond the SM particles. Although the model is almost indistinguishable from the minimal SM at low energies, its cosmological consequences differ dramatically. In particular, the electroweak Higgs field remains trapped in the symmetric phase until the Universe cools to very low temperatures, $T_c^{(\chi)}\sim 28$ MeV, where the first-order QCD chiral symmetry-breaking phase transition triggers the electroweak phase transition. This scenario offers intriguing possibilities for the production of primordial black holes, low-frequency gravitational waves, and multi-quark and lepton nuggets, which we explore in some detail using simplified approximations.
  • PDF
    We demonstrate that $k$-Markov sequences of unitary gates provide low-cost handles to manipulate the rate and structure of information spreading compared to traditional random, 0-Markov, circuits. For SWAP gates and brickwork circuits, we use graph cover time to demonstrate how $k$-Markov processes can be used to control operator transport. With SWAP gates and the set of Clifford gates that can change operator weight, we show how $k$-Markov sequences can be used to manipulate scrambling time and generate novel structures of spatial-temporal correlations across a qubit network. We show that $k$-Markov circuits constructed from PSWAP gates at fixed angle are equivalent to standard brickwork circuits with PSWAP angle drawn from non-uniform distributions generated by the $k$-Markov process. In those circuits, the time evolution of the average Hamming weight and the space-time correlation structure after equilibrium again vary significantly from the 0-Markov case, depending on the transition probabilities of the process.
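    A sketch of the sampling ingredient (gate set and transition table purely illustrative): in a $k$-Markov sequence the distribution of the next gate depends on the previous $k$ gates, with $k=0$ recovering the usual independent random choice.

        import random

        def sample_k_markov(gates, k, T, transition, seed=0):
            """Sample a length-T gate sequence from an order-k Markov process."""
            rng = random.Random(seed)
            seq = [rng.choice(gates) for _ in range(k)]
            while len(seq) < T:
                history = tuple(seq[-k:]) if k else ()
                weights = transition.get(history, [1.0] * len(gates))
                seq.append(rng.choices(gates, weights=weights)[0])
            return seq

        gates = ["SWAP", "CZ"]
        # 1-Markov: after a SWAP, strongly prefer another SWAP.
        transition = {("SWAP",): [0.9, 0.1], ("CZ",): [0.5, 0.5]}
        print(sample_k_markov(gates, k=1, T=20, transition=transition))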
  • PDF
    It is known that preprocessing noise may boost quantum key distribution by expanding the range of values of tolerated noise. For BB84, adding trusted noise may allow the generation of secret keys even for qubit error rate (QBER) beyond the 11% threshold in the asymptotic regime. Here we study the effect of preprocessing noise in the finite-size regime where only a limited number of signals are exchanged between Alice and Bob. We compute tight numerical lower bounds in terms of the sandwiched Rényi entropy of order $\alpha$, optimized via a two-step Frank-Wolfe algorithm, in the presence of a trusted flipping probability $q$. We find that trusted noise improves the key rate only for a finite interval of $\alpha$, from the $\alpha \to 1$ limit up to $\alpha \approx 1.4$. By optimizing over the value of $\alpha$, we determine finite-size key rates for different values of the QBER, observing enhancement due to trusted noise both in asymptotic and finite-size regimes. Finally, we determine the maximum tolerable QBER as a function of the block size.
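    For orientation on the threshold being pushed past: the standard asymptotic BB84 rate without preprocessing, $r(e) = 1 - 2h(e)$ with $h$ the binary entropy, vanishes near 11% QBER. A quick bisection (a sketch, not the paper's Rényi-entropy computation):

        import math

        def h(p):
            return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

        lo, hi = 0.0, 0.5
        for _ in range(60):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if 1 - 2*h(mid) > 0 else (lo, mid)
        print(f"zero-rate QBER threshold: {(lo + hi) / 2:.4f}")  # ~0.1100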
  • PDF
    While Multimodal Large Language Models demonstrate impressive semantic capabilities, they often suffer from spatial blindness, struggling with fine-grained geometric reasoning and physical dynamics. Existing solutions typically rely on explicit 3D modalities or complex geometric scaffolding, which are limited by data scarcity and generalization challenges. In this work, we propose a paradigm shift by leveraging the implicit spatial prior within large-scale video generation models. We posit that to synthesize temporally coherent videos, these models inherently learn robust 3D structural priors and physical laws. We introduce VEGA-3D (Video Extracted Generative Awareness), a plug-and-play framework that repurposes a pre-trained video diffusion model as a Latent World Simulator. By extracting spatiotemporal features from intermediate noise levels and integrating them with semantic representations via a token-level adaptive gated fusion mechanism, we enrich MLLMs with dense geometric cues without explicit 3D supervision. Extensive experiments across 3D scene understanding, spatial reasoning, and embodied manipulation benchmarks demonstrate that our method outperforms state-of-the-art baselines, validating that generative priors provide a scalable foundation for physical-world understanding. Code is publicly available at https://github.com/H-EmbodVis/VEGA-3D.
  • PDF
    The ability to render scenes at adjustable fidelity from a single model, known as level of detail (LoD), is crucial for practical deployment of 3D Gaussian Splatting (3DGS). Existing discrete LoD methods expose only a limited set of operating points, while concurrent continuous LoD approaches enable smoother scaling but often suffer noticeable quality degradation at full capacity, making LoD a costly design decision. We introduce Matryoshka Gaussian Splatting (MGS), a training framework that enables continuous LoD for standard 3DGS pipelines without sacrificing full-capacity rendering quality. MGS learns a single ordered set of Gaussians such that rendering any prefix, the first k splats, produces a coherent reconstruction whose fidelity improves smoothly with increasing budget. Our key idea is stochastic budget training: each iteration samples a random splat budget and optimises both the corresponding prefix and the full set. This strategy requires only two forward passes and introduces no architectural modifications. Experiments across four benchmarks and six baselines show that MGS matches the full-capacity performance of its backbone while enabling a continuous speed-quality trade-off from a single model. Extensive ablations on ordering strategies, training objectives, and model capacity further validate the designs.

Recent comments

Andru Gheorghiu Mar 20 2026 16:42 UTC

Steve nose what's up.

Steve Flammia Mar 19 2026 13:19 UTC

Smell Inequalities?

Jahan Claes Mar 17 2026 15:33 UTC

I have a question about Appendix A2, where you're constructing the Pauli envelope. In Step 2, why do you need to retain the reset operation before the measurement result? If I am measuring in the $Z$ basis, the reset seems redundant, because I've already projected the leaked qubit in $|0\rangle$ wit

...(continued)
George Umbrarescu Mar 17 2026 02:47 UTC

We will update the abstract in a future version of the paper to make the wording clearer. Thank you for your interest and for your previous work on this topic!

Kwok Ho Mar 17 2026 02:25 UTC

Interesting results! Just want to point to our approach: https://scirate.com/arxiv/2509.08658, we believe we can simulate the $d=5$ cultivation circuit given more computational resources (such as a single gpu).

John Ye Mar 16 2026 22:01 UTC

I have now obtained all results with a larger time budget for ScaLER, and I wrote a blog post to further explain how we obtained the data and results in the paper [Blog: challenge-towards-accurately-testing-qec-at-scale][1].

After running for 96 hours, ScaLER used 3.41e9 samples and obtained an estimate of 7.843e-12. This is 186 t

...(continued)
Noah Shutty Mar 16 2026 18:22 UTC

Cool, thanks for the clarifications!

Noah Shutty Mar 16 2026 16:17 UTC

Got it, thanks for clarifying the n/m=o(1) regime!

Gokul Subramanian Ravi Mar 13 2026 17:40 UTC

Congrats! Glad to see this excellent expansion on our [DS-ZNE][1] work.

As a minor comment, it would be great if you could alter the following line in your abstract, "*This is distinct from some alternative approaches, as QEC is here used as a subroutine inside the QEM framework, while other proposa

...(continued)