Top arXiv papers

  • PDF
    Creation and manipulation of entanglement with low error is essential in quantum information systems. In practice, two-qubit entangling gates constitute a dominant error source, limiting circuit depths and performance in fault-tolerant architectures. Using a neutral-atom quantum processor, we realize entangling CZ gates with a high-Rabi-frequency, smooth-amplitude pulse, employing state-selective readout and qubit reuse for fast calibration, and achieve state-of-the-art fidelities of 99.854(4)%, which improve to 99.941(3)% upon loss postselection, with stable performance for 10 hours. We then use these low-error gates in quantum circuits with coherent atom rearrangement. We first benchmark performance by creating and disentangling cluster states, and subsequently implement scrambling circuits featuring longer-range connectivity to study non-locally entangled states generated through chaotic dynamics. These results pave the way towards deep-circuit, efficient fault-tolerant quantum computation.
  • PDF
    Establishing the precise computational boundary between classically tractable fermionic systems and those capable of genuine quantum advantage is a central challenge in quantum simulation. While injecting non-Gaussian "magic" inputs into free-fermion circuits is widely expected to generate intractable complexity, we identify a physically motivated intermediate regime. Supported by rigorous bounds and numerical evidence, we show that for a class of paired non-Gaussian fermionic states, essential quantum simulation primitives -- transition amplitudes, overlaps, and arbitrary-weight number correlators -- can be efficiently approximated to additive error under free-fermionic dynamics. This tractability stems from an algebraic reduction that compresses exponentially large multiparticle interference into a single coefficient of a multivariate Pfaffian polynomial. Because these classical estimators match the intrinsic $O(1/\sqrt{K})$ statistical uncertainty of quantum hardware utilizing $K$ measurement shots, they constitute a practical benchmark. Building on this foundation, we construct an additive-error estimator for high-weight Wilson observables in the noninteracting quench of recent trapped-ion experiments, providing a rigorous classical benchmark. Extending this to quantum chemistry, we demonstrate that core overlap-based subroutines for antisymmetrized products of strongly orthogonal geminals admit exact Pfaffian reductions. Ultimately, these results sharpen the boundary of quantum advantage, establishing that the paired-electron scaffold is effectively dequantized and clarifying exactly where quantum resources are indispensable.
  • PDF
    We introduce a quantum algorithm for simulating the dynamics of electrical circuits consisting of resistors, inductors and capacitors (aka RLC circuits) along with power sources. Given oracle access to the connectivity of the circuit and values of the electrical elements, our algorithm prepares a quantum state that encodes the voltages and currents either at a specified time or across the history of their evolution over a time interval. For an RLC circuit with $N$ components, our algorithm runs in time $\textsf{polylog}(N)$ under mild assumptions on the connectivity of the circuit and values of its components. This provides an exponential speed-up over classical algorithms that take $\textsf{poly}(N)$ time in the worst case. Our algorithm can be used to estimate energy across a set of components or dissipated power in $\textsf{polylog}(N)$ time, a problem that we prove is BQP-hard and therefore unlikely to be efficiently solved by classical algorithms. The main challenge in simulating the dynamics of RLC circuits is that they are governed by differential-algebraic equations (DAEs), a coupled system of differential equations with hidden algebraic constraints. Consequently, existing quantum algorithms for ordinary differential equations cannot be directly utilized. We therefore develop a quantum DAE solver for simulating the time-evolution of linear DAEs. For RLC circuits, we employ modified nodal analysis to create a system of DAEs compatible with our quantum algorithm. We establish BQP-hardness by demonstrating that any network of classical harmonic oscillators, for which an energy-estimation problem is known to be BQP-hard, is a special case of an LC circuit. Our work gives theoretical evidence of quantum advantage in simulating RLC circuits and we expect that our quantum DAE solver will find broader use in the simulation of dynamical systems.
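    As a classical illustration of the DAE structure involved (a sketch, not the paper's quantum solver), the code below assembles the modified-nodal-analysis system $E\dot{x} = Ax + Bu(t)$ for a series RLC circuit, where the singular $E$ carries the hidden algebraic constraints, and integrates it with implicit Euler; the circuit topology and component values are illustrative choices.

    ```python
    import numpy as np

    # Series RLC driven by a step voltage source, written via modified
    # nodal analysis (MNA) as a linear DAE  E x'(t) = A x(t) + B u(t).
    R, L, C = 1.0, 1e-3, 1e-6
    # State x = [v1, v2, i_L, i_src]: node voltages, inductor/source currents.
    E = np.array([[0, 0, 0, 0],
                  [0, C, 0, 0],
                  [0, 0, L, 0],
                  [0, 0, 0, 0]], dtype=float)   # singular: rows 1 and 4 are algebraic
    A = np.array([[-1/R, 1/R, 0, 1],    # KCL at node 1 (algebraic)
                  [1/R, -1/R, -1, 0],   # KCL at node 2 (differential)
                  [0, 1, 0, 0],         # inductor branch: L di_L/dt = v2
                  [1, 0, 0, 0]], dtype=float)   # source constraint: v1 = u(t)
    B = np.array([0.0, 0.0, 0.0, -1.0])
    u = lambda t: 1.0                   # unit-step drive

    # Implicit Euler handles the singular E directly:
    # (E - h A) x_{n+1} = E x_n + h B u(t_{n+1}).
    h, T = 1e-6, 5e-4
    x = np.zeros(4)
    for n in range(int(T / h)):
        x = np.linalg.solve(E - h * A, E @ x + h * B * u((n + 1) * h))
    print("v2(T) =", x[1], " i_L(T) =", x[2])
    ```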
  • PDF
    This work investigates which sets of quantum states give rise to the highest achievable success probability in minimum-error state discrimination if multiple copies of the unknown state are given. Specifically, we consider uniformly distributed ensembles of the form $\left\{\frac{1}{N},\rho_i^{\otimes k}\right\}_{i=1}^N$, where $N$ states in dimension $d$ are provided in $k$ identical copies, and derive universal limits in this scenario. For pure state ensembles, we prove that whenever $N$ is large enough to support a state $k$-design, these designs will exactly give rise to the maximally discriminable sets. We further show that when $N$ exceeds the size required for a $k$-design, mixed states can outperform all pure state ensembles. We also analyse the analogue classical discrimination problems, in which states are replaced by probability distributions. We recognise that the problem of most discriminable classical states in the multi-copy regime is in one-to-one correspondence to the concept of the multiplicative Bayes capacity of independent uses of classical channels, a concept that emerges naturally in the context of classical information leakage. This connection allows us to completely solve the classical analogue of our problem when $N\geq \binom{d + k - 1}{k}$, and to prove that quantum systems offer a quadratic advantage (in number of copies $k$) over classical ones. Curiously, we also show that this quantum advantage is strongly reduced when one is restricted to real quantum states. Finally, we introduce computational techniques to find sets of most discriminable ensembles, and to obtain rigorous universal upper bounds on the maximal success probability for multi-copy state discrimination in cases that are analytically intractable.
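    For the smallest instance ($N = 2$ pure states with a uniform prior), the multi-copy Helstrom success probability can be checked directly; the qubit states below are arbitrary illustrative choices, and the closed form $\tfrac{1}{2}(1 + \sqrt{1 - c^{2k}})$ with overlap $c = |\langle\psi_1|\psi_2\rangle|$ serves as a cross-check.

    ```python
    import numpy as np
    from functools import reduce

    def helstrom_success(psi1, psi2, k):
        """Optimal success probability for discriminating k copies of psi1
        vs k copies of psi2, uniform prior:
        1/2 + (1/4) * trace norm of the difference of the k-fold states."""
        rho = lambda v: np.outer(v, v.conj())
        kron_k = lambda m: reduce(np.kron, [m] * k)
        delta = kron_k(rho(psi1)) - kron_k(rho(psi2))
        return 0.5 + 0.25 * np.abs(np.linalg.eigvalsh(delta)).sum()

    theta = np.pi / 8
    psi1 = np.array([1.0, 0.0])
    psi2 = np.array([np.cos(theta), np.sin(theta)])
    c = abs(psi1.conj() @ psi2)              # single-copy overlap
    for k in (1, 2, 4, 8):
        print(k, helstrom_success(psi1, psi2, k),
              0.5 * (1 + np.sqrt(1 - c ** (2 * k))))   # closed form
    ```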
  • PDF
    We consider the problem of quantum channel certification to unitary, where one is given access to an unknown $d$-dimensional channel $\mathcal{E}$, and wants to test whether $\mathcal{E}$ is equal to a target unitary channel or is $\varepsilon$-far from it in the diamond norm. We present optimal quantum algorithms for this problem, settling the query complexities in three access models with increasing power. Specifically, we show that: (i) $\Theta(d/\varepsilon^2)$ queries suffice for the incoherent access model, matching the lower bound due to Fawzi, Flammarion, Garivier, and Oufkir (COLT 2023). (ii) $\Theta(d/\varepsilon)$ queries suffice for the coherent access model, matching the lower bound due to Regev and Schiff (ICALP 2008). (iii) $\Theta(\sqrt{d}/\varepsilon)$ queries suffice for the source-code access model, matching the lower bound due to Jeon and Oh (npj Quantum Inf. 2026). This demonstrates a strict hierarchy of complexities for quantum channel certification to unitary across various access models.
  • PDF
    The pure-loss channel is a fundamental model for describing noise in bosonic quantum platforms. It is characterised by a single parameter, the transmissivity, which quantifies the fraction of the input energy that reaches the output of the channel. In realistic scenarios, however, such as free-space quantum communication, the transmissivity is not fixed but fluctuates from one channel use to another. In this setting, the overall channel is effectively described as a convex combination of pure-loss channels, known as a fading channel. Despite its practical relevance, the quantum Shannon theory of the fading channel has remained largely unexplored. Here, we address this gap, specifically investigating degradability, anti-degradability, entanglement breakingness, and capacities of the fading channel. Of particular relevance to practical quantum-internet applications, we prove that entanglement distribution and quantum key distribution can always be achieved at a strictly positive rate over any fading channel, no matter how noisy it is or how strongly the transmissivity fluctuates, provided the channel is not completely noisy. Moreover, we prove that thermal states, which are optimal for a broad class of static bosonic Gaussian channels, fail to achieve the entanglement-assisted classical capacity of fading channels: non-Gaussian Fock-diagonal states strictly outperform all Gaussian encodings. Most strikingly, we identify regimes where the coherent information of thermal inputs vanishes, while optimized non-Gaussian states achieve strictly positive values, thereby activating the channel for quantum communication. For a paradigmatic binary fading model we establish this result analytically, deriving the exact capacity-achieving state in closed form. For general fading distributions, we design an iterative variational algorithm to optimize the coherent and mutual information.
  • PDF
    We study the power of quantum witnesses under perfect completeness. We construct a classical oracle relative to which a language lies in $\mathsf{QMA}_1$ but not in $\mathsf{QCMA}$ when the $\mathsf{QCMA}$ verifier is only allowed polynomially many adaptive rounds and exponentially many parallel queries per round. Additionally, we derandomize the permutation-oracle separation of Fefferman and Kimmel, obtaining an in-place oracle separation between $\mathsf{QMA}_1$ and $\mathsf{QCMA}$. Furthermore, we focus on $\mathsf{QCMA}$ and $\mathsf{QMA}$ with an exponentially small gap, where we show a separation assuming the gap is fixed, but not when it may be arbitrarily small. Finally, we derive consequences for approximate ground-state preparation from sparse Hamiltonian oracle access, including a bounded-adaptivity frustration-free variant.
  • PDF
    It is a well-established fact that some quantum correlations can be nonlocal, meaning that they cannot be described by a local hidden variable model. Certain quantum correlations have a form of nonlocality so strong that they cannot be reproduced even by models having an arbitrarily small local hidden variable component. These correlations are called fully nonlocal and lead to Bell inequalities in which the maximum quantum value saturates the non-signaling bound. A well-known example of this effect, which is also referred to as quantum pseudo-telepathy or all-versus-nothing proofs of nonlocality, is the quantum distribution fulfilling the Peres-Mermin square, in which the underlying state is a $4\times4$ dimensional maximally entangled state. Other examples of full nonlocality are known but, so far, all of them are for maximally entangled states and it is an open question whether maximal entanglement is necessary for full nonlocality. In this work, we first establish a link between full nonlocality and the concept of antidistinguishability of quantum states. We use this connection to show that in every bipartite $d\times d$ Hilbert space, with $d\geq3$, there are non-maximally entangled states that are fully nonlocal. In fact, we derive simple sufficient conditions for full nonlocality that are only based on the smallest and largest Schmidt coefficients. We also show that in every dimension there exist pure entangled states that do not exhibit full nonlocality. Finally, we show that all pure entangled states can be activated to show full nonlocality in the many-copy scenario.
  • PDF
    Simulating quantum dynamics is one of the central applications of quantum computing. For Hamiltonians written as a sum of many terms, deterministic Trotter--Suzuki product formulas can require applying a large number of term-wise evolutions at each time step, leading to high circuit costs for large or dense systems. Randomized methods such as qDRIFT offer an alternative: each step samples only one Hamiltonian term, giving a circuit depth with no explicit dependence on the number of terms. However, when qDRIFT is used for observable estimation, high precision requires many independent random circuit realizations, resulting in a total gate complexity that scales as $\mathcal{O}(\varepsilon^{-3})$. We introduce a multilevel Monte Carlo framework for qDRIFT that reduces this sampling overhead. The method constructs a hierarchy of qDRIFT estimators with increasing circuit depths and couples adjacent levels by sharing their random Hamiltonian-term samples. This coupling makes the variance of the level differences decay with depth, allowing most samples to be taken on cheaper, coarse circuits and only a few on expensive, fine circuits. We prove that the resulting MLMC-qDRIFT estimator reduces the total gate complexity for fixed-precision observable estimation from the standard qDRIFT scaling $\mathcal{O}(\varepsilon^{-3})$ to $\mathcal{O}(\varepsilon^{-2}\log^2(1/\varepsilon))$, while preserving qDRIFT's lack of explicit dependence on the number of Hamiltonian terms. Numerical experiments for spin-chain dynamics confirm the predicted variance decay and demonstrate the practical gate-count savings of the multilevel construction.
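    A toy sketch of the coupling idea (illustrative only; the paper's estimator, variance analysis, and sample allocations are not reproduced): adjacent levels share one stream of Hamiltonian-term samples, with the coarse circuit consuming the first half of the fine circuit's draws, so the level differences stay correlated. The 3-qubit Hamiltonian, observable, and per-level sample counts are arbitrary choices.

    ```python
    import numpy as np
    from functools import reduce
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.diag([1.0, -1.0]).astype(complex)
    I2 = np.eye(2, dtype=complex)
    op = lambda ops: reduce(np.kron, ops)

    # H = sum_j h_j H_j with unit-norm H_j; qDRIFT samples j with prob h_j/lam.
    terms = [op([X, X, I2]), op([I2, X, X]),
             op([Z, I2, I2]), op([I2, Z, I2]), op([I2, I2, Z])]
    weights = np.array([1.0, 1.0, 0.6, 0.6, 0.6])
    lam = weights.sum()
    t = 1.0
    psi0 = np.zeros(8, dtype=complex); psi0[0] = 1.0
    obs = op([Z, I2, I2])

    def qdrift_value(stream, n_steps):
        """Exact <O> for one sampled qDRIFT circuit built from the first
        n_steps entries of a shared term-sample stream."""
        psi = psi0.copy()
        for j in stream[:n_steps]:
            psi = expm(-1j * (lam * t / n_steps) * terms[j]) @ psi
        return (psi.conj() @ obs @ psi).real

    # MLMC telescope: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with most
    # samples spent on the cheap coarse levels.
    N0, samples = 4, [400, 200, 100, 50]
    estimate = 0.0
    for l, m in enumerate(samples):
        n_fine = N0 * 2 ** l
        diffs = []
        for _ in range(m):
            stream = rng.choice(len(terms), size=n_fine, p=weights / lam)
            fine = qdrift_value(stream, n_fine)
            coarse = qdrift_value(stream, n_fine // 2) if l > 0 else 0.0
            diffs.append(fine - coarse)
        estimate += np.mean(diffs)
    print("MLMC-qDRIFT estimate of <O(t)>:", estimate)
    ```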
  • PDF
    Finite local Hilbert-space truncations arise naturally in quantum simulations of lattice field theories and motivate qudit encodings, but their fault-tolerant advantage over qubit encodings remains unclear. We compare the non-Clifford cost of implementing quadratic diagonal evolutions, exemplified by $U=e^{-it\phi_x^2}$ in a uniform field-amplitude discretization of a real scalar field, using either one logical $d$-level qudit or $n_b=\lceil \log_2 d\rceil$ logical qubits. We analyze two standard settings: product-formula simulation and LCU/block encoding, taking the resource metric to be the number of non-Clifford gates after synthesis into a discrete logical gate set. Because tight synthesis bounds for general single-qudit rotations are not known, we express the qudit constructions in terms of embedded two-level $SU(2)$ rotations and derive explicit finite-$d$ break-even conditions for their synthesis cost; these serve as compiler targets for when qudit encodings can outperform the qubit baseline. Within the constructive models studied here, product-formula implementations would require an exponentially stronger per-primitive synthesis advantage for qudits to win asymptotically, while in the LCU setting the qubit encoding is asymptotically cheaper in $d$. Nevertheless, the finite-$d$ threshold analysis identifies low-dimensional regions in which qudits can yield meaningful constant-factor savings, particularly for LCU-based implementations. As a secondary analysis of the LCU construction, we use an idealized negligible-overhead qubit-qudit code-switching model to give an absolute $T$-count comparison, and reinterpret the savings as an allowable per-switch overhead budget.
  • PDF
    Schroedinger's equation gave early quantum theory a visual language that looked like physics again: a wave evolving by a linear differential equation. This essay argues that the same success also seeded a recurring impulse to keep quantum theory "classical-looking" by treating the wave function as a physical wave. Schroedinger quickly realized that, for many-particle systems, the wave function is naturally defined on configuration space rather than ordinary physical space, blocking any straightforward reading of it as a literal classical wave. Read through Mach and Boltzmann, who shaped his intellectual outlook most deeply, his achievement appears double-edged: it provided an extraordinarily powerful picture for calculation and discovery, while also warning against taking that picture too literally. I argue that this tension never fully disappeared. It still reappears in modern physics whenever the wave function, or in quantum field theory the field itself, is treated as ontology rather than as part of a representation tied to measurement and observational context, a point sharpened by Bell-type no-go theorems. The centenary moral is: use pictures boldly, but demote them ontologically.
  • PDF
    One of the main difficulties in preparing many-body ground states is achieving the target state through simple counterdiabatic controls. For critical systems crossing a transition to a topological phase, this task becomes even more challenging due to the closing of the gaps in multiple symmetry sectors. This is the case for the Kitaev chain, whose transition between the trivial and topological phases involves states belonging to different symmetry sectors. In this work, we apply the recently introduced minimal action shortcut to adiabaticity (MA-STA) to a Kitaev chain and propose a multi-step strategy to obtain the optimal control protocol to drive the system across its different phases. Our results show that high fidelities can be achieved through the adapted MA-STA at time scales much shorter than those of linear ramp protocols. We also compare the performance of both controls in suppressing work fluctuations. These findings may guide the design of STA protocols in many-body systems where competing energy scales and symmetries shape the global dynamics.
  • PDF
    Over the past 25 years, I have been involved in some intriguing developments in the foundations of physics, exploring the quantum reality problem, the relationship between quantum theory and gravity, and the interplay between consciousness and physical laws. These investigations make it plausible that we will find physics beyond quantum theory, potentially including both new evolution laws and new types of measurement. There is also a significant chance that they could have a transformative impact on information processing, and on the development of, and our future with, AI.
  • PDF
    Broad claims about whether adaptivity helps in quantum state tomography can be misleading unless the state family, measurement architecture, and error metric are specified carefully. We study a restricted but physically important regime: single-copy quantum state tomography under local Pauli basis measurements, where the allowed measurement settings are tensor-product measurement operators built from local single-qubit Pauli operators, and performance is measured in trace distance with high probability in a minimax sense over a known structured family. We construct an explicit discrete prefix/tree family of states for which adaptive measurement selection achieves polynomial copy complexity, while every non-adaptive design requires exponentially many copies in the worst case. The adaptive upper bound comes from stagewise prefix recovery using hierarchical breadcrumb information revealed by partial prefix matches. The non-adaptive lower bound is based on a rare-prefix mechanism: every fixed design under-samples some deep prefix subset, and outside that subset the competing hypotheses induce identical one-shot laws, so only an exponentially small fraction of the measurement budget contributes to the KL divergence between the full data distributions. The result isolates a concrete regime in which adaptivity provably changes the sample-complexity scaling under the experimentally common local Pauli measurement architecture.
  • PDF
    Advances in quantum information science (QIS) are providing transformative insights into the complexity of quantum many-body systems, potentially defining new frontiers in nuclear and high-energy physics. This review explores how QIS-derived techniques are fostering new analytic frameworks and algorithms - both classical and quantum - to tackle (some of the) present barriers to discovery in fundamental physics, with applicability to other science domains. We highlight how these techniques are shedding new light on the structure and dynamics of hadrons, nuclei, matter in extreme conditions, and beyond. Importantly, they are expected to play an essential role in the development of large-scale quantum simulations of such systems, particularly in setting the balance among quantum and classical computational resources.
  • PDF
    We report the largest trapped-ion hardware demonstration of lattice protein-folding optimization to date, using bias-field digitized counterdiabatic quantum optimization (BF-DCQO) on a fully connected 64-qubit Barium development system similar to the forthcoming IonQ Tempo line. Six peptide sequences with 14-16 amino-acid residues are encoded using a coarse-grained tetrahedral lattice model, yielding higher-order spin-glass Hamiltonians with long-range interactions involving up to five-body terms and mapped to 46-61 qubits. The resulting instances are demanding for near-term quantum hardware because low-energy configurations must satisfy backbone-geometry constraints while optimizing dense residue-contact interactions. BF-DCQO uses a non-variational bias-feedback mechanism, where low-energy samples from each round define longitudinal fields that guide subsequent quantum evolutions. Across the studied instances, BF-DCQO shifts raw sampled energy distributions toward lower energies than uniform random sampling, with the strongest improvements appearing in residue-contact variables. To preserve this signal, we introduce a consensus-based post-processing pipeline that combines quantum-learned contact information with feasible backbone geometries. The resulting hybrid workflow reaches the classical reference energy in multiple instances and improves over the corresponding random-seeded pipeline. These results show that BF-DCQO can generate structured samples for dense protein-folding Hamiltonians at previously unexplored trapped-ion scales.
  • PDF
    This article presents a brief account of Amir O. Caldeira's contributions to the theory of quantum Brownian motion. Motivated by its importance, we outline the description of Brownian motion in the quantum regime following Caldeira's first works. In this context, we particularly highlight the effect of dissipation on the tunneling rate out of a metastable state. We then journey along the alternative ways to approach quantum Brownian motion developed by Caldeira during his career, which go beyond the so-called Caldeira-Leggett model. We conclude by summarizing some of Caldeira's contributions to contemporary fields, such as the theory of quantum decoherence and quantum thermodynamics, which were strongly inspired by his eponymous approach to quantum Brownian motion.
  • PDF
    Quantum simulation is a cornerstone application for quantum computing, yet standard methods face a trade-off between circuit depth and accuracy: Trotterization depth scales with the number of Hamiltonian terms $L$, while sampling-based qDRIFT is restricted to $O(t^2)$ error scaling. Here, we introduce qSHIFT, an adaptive sampling protocol that overcomes these limitations. By adaptively updating sampling distributions, qSHIFT maintains $L$-independent gate complexity while achieving an improved error scaling of $O(t^{1+r})$ for an adjustable parameter $r$. This performance is enabled by a classical subroutine solving $L^r$ linear equations per sampling round. Numerical demonstrations confirm the $O(t^{1+r})$ scaling, showcasing qSHIFT as a resource-efficient framework for high-precision quantum simulation. Furthermore, the protocol's reduced circuit depth enhances its compatibility with physical error mitigation, making it a promising candidate for implementation on near-term quantum devices. In addition to its role as a standalone algorithm, qSHIFT can provide a high-precision foundation for modular quantum frameworks such as qSWIFT or Krylov quantum diagonalization.
  • PDF
    Disordered one-dimensional interacting systems have long been characterized via conventional correlation functions. A complementary quantum-information perspective quantifies the randomness of the unitary ensemble dynamics generated by a quantum system through the frame potential, which serves as a practical diagnostic for quantum algorithmic performance. However, no analytical treatment has yet been achieved for experimentally accessible interacting one-dimensional systems. In this Letter, we derive a closed-form expression for the frame potential of a Tomonaga-Luttinger liquid with quenched Gaussian forward-scattering disorder. Exploiting the exactly quadratic structure of the disorder-averaged Keldysh action, we show that the frame potential decays as a power law at early times and saturates to a late-time plateau controlled by a single coupling parameter. Taking the random field XXZ spin chain as a specific microscopic realization, we show that the strongest randomness is achieved near the Heisenberg ferromagnetic point and can be exponentially enhanced through a multiple-quench protocol. We validate our results across the entire gapless phase, with direct implications for algorithm design in analog quantum simulation platforms.
  • PDF
    We present a criterion that serves as the basis for a polynomial-time algorithm to decide whether a finite set of qudit gates, obtained by exponentiating given Hamiltonians, is universal. Our approach formulates universality in Lie algebraic terms and applies Borel--de Siebenthal theory with a diagonal generator having incommensurate spectrum. In this framework, nonuniversality is detected by invariant subspaces, equivalently by a graph-connectivity obstruction, while universality is repaired by adding generators that couple disconnected components. We further prove that two generators are sufficient for universal control. Our work reveals a profound link between qudit universality and irreducibility of Lie algebra representations.
  • PDF
    Learning curves are a fundamental primitive in supervised learning, describing how an algorithm's performance improves with more data and providing a quantitative measure of its generalization ability. Formally, a learning curve plots the decay of an algorithm's error for a fixed underlying distribution as a function of the number of training samples. Prior work on revenue-maximizing learning algorithms, starting with the seminal work of Cole and Roughgarden [STOC, 2014], adopts a distribution-free perspective, which parallels the PAC learning framework in learning theory. This approach evaluates performance against the hardest possible sequence of valuation distributions, one for each sample size, effectively defining the upper envelope of learning curves over all possible distributions, thus leading to error bounds that do not capture the shape of the learning curves. In this work we initiate the study of learning curves for revenue maximization and provide a near-complete characterization of their rate of decay in the basic setting of a single item and a single buyer. In the absence of any restriction on the valuation distribution, we show that there exists a Bayes-consistent algorithm, meaning that its learning curve converges to zero for any arbitrary valuation distribution as the number of samples $n \to \infty$. However, this convergence must be arbitrarily slow, even if the optimal revenue is finite. In contrast, if the optimal revenue is achieved by a finite price, then the optimal rate of decay is roughly $1/\sqrt{n}$. Finally, for distributions supported on discrete sets of values, we show that learning curves decay almost exponentially fast, a rate unattainable under the PAC framework.
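    A minimal empirical-revenue-maximization sketch for this single-item, single-buyer setting: post the sample price that maximizes empirical revenue and measure its regret against the true optimum. For Uniform[0,1] values the optimal price is 1/2 with revenue 1/4; the shrinking regret as $n$ grows traces out one point of the learning curve per sample size.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    F_tail = lambda p: max(0.0, 1.0 - p)   # P(v >= p) for Uniform[0,1]
    true_opt = 0.25                        # optimal revenue, at price 1/2

    def erm_price(samples):
        """Best posted price among the observed sample values."""
        cand = np.sort(samples)
        n = len(cand)
        revenues = cand * (n - np.arange(n)) / n   # cand[i] sells to n - i buyers
        return cand[np.argmax(revenues)]

    for n in (10, 100, 1000, 10000):
        regrets = []
        for _ in range(200):               # average over draws of the sample
            p = erm_price(rng.uniform(size=n))
            regrets.append(true_opt - p * F_tail(p))
        print(n, np.mean(regrets))
    ```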
  • PDF
    The study of Entanglement Asymmetry has emerged in recent years as a powerful tool to characterise the symmetry properties of quantum states in relation to a given charge operator through the lens of entanglement. While extremely powerful and general, the standard definition of asymmetry introduces significant non-Gaussian features in free-fermionic systems, leading to certain analytical limitations. In this work, we introduce an asymmetry measure that remains strictly within the Gaussian manifold and analyse its properties. In particular, we show that it quantifies the minimal distance between a Gaussian state and the manifold of symmetric Gaussian states. We further demonstrate that this measure captures the established dynamical signatures of entanglement asymmetry, such as the Mpemba effect, symmetry restoration, and the lack thereof. The Gaussian structure allows this novel asymmetry measure to be computed exactly using correlation matrix techniques, and to be described asymptotically through the quasiparticle picture. We also comment on the possibility of using charge fluctuations to characterise the asymmetry of a Gaussian state.
  • PDF
    We propose a method to evaluate general thermodynamic fluctuations in open quantum systems, based on performing a two-point measurement scheme on the system using dynamics-dependent thermodynamic observables. Our approach allows one to obtain exact equalities for fluctuations of path-dependent thermodynamic quantities such as work and heat, and to isolate correction factors to Jarzynski's equality, requiring only access to the system degrees of freedom. This framework is flexible and can be applied to the limiting case of closed systems, recovering previous, yet seemingly contradictory, results from the literature. Moreover, the formalism admits a straightforward extension to strongly coupled open quantum systems. We investigate the effect of specific dynamical classes on the fluctuation relations, and show that the pure decoherence case is particularly special, as it deterministically contains no heat contribution and thus constitutes a class of open system dynamics for which the Jarzynski equality for work fluctuations is identically true at any coupling strength. Finally, we look explicitly at the shape and size of the correction factors to Jarzynski's equality for a qubit undergoing phase covariant dynamics, both in the weakly-coupled regime and in the deep non-Markovian regime.
  • PDF
    We study differentially private approximation algorithms for positive linear programs (LPs with nonnegative coefficients and variables), focusing on the fundamental families of packing, covering, and mixed packing-covering formulations. We focus on the high-sensitivity, constraint-private regime of Hsu-Roth-Roughgarden-Ullman (ICALP 2014), where neighboring instances may differ by an arbitrary single constraint, so one cannot hope to approximately satisfy every constraint under privacy. We give private solvers that return approximate solutions while violating only a controlled number of constraints. Our algorithms improve the prior instance-dependent guarantees, and also yield new data-independent bounds that depend only on the dimension. Our techniques involve a dense multiplicative weights update method developed from a regularized dual viewpoint, which we analyze in a way that exploits structure specific to positive LPs.
  • PDF
    We present a quantum feature-selection framework based on a higher-order unconstrained binary optimization (HUBO) formulation that explicitly incorporates multivariate dependencies beyond standard quadratic encodings. In contrast to QUBO-based approaches, the proposed model includes one-, two-, and three-body interaction terms derived from mutual-information measures, enabling the objective function to capture feature relevance, pairwise redundancy, and higher-order statistical structure within a unified energy model. To suppress trivial all-selected solutions, we further include structured linear penalties that promote sparsity while preserving informative variables. The resulting HUBO instances are optimized with digitized counterdiabatic quantum optimization on IonQ Forte and compared against noiseless quantum simulation as well as two classical dimensionality-reduction baselines: SelectKBest based on mutual information and principal component analysis (PCA). We evaluate the proposed workflow on two benchmark classification datasets, namely the Gallstone dataset and the Spambase dataset, and analyze both predictive performance and selected-subset structure. The results show good qualitative agreement between hardware executions and noiseless simulations, supporting the feasibility of implementing higher-order feature-selection Hamiltonians on current trapped-ion processors. In addition, the quantum approach yields competitive classification performance while producing compact and informative feature subsets, highlighting the potential of higher-order quantum optimization for machine-learning preprocessing tasks.
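    A hedged sketch of how such a HUBO objective can be assembled (the paper's exact coefficient construction is not reproduced here): one-body terms reward relevance and impose a sparsity penalty, two-body terms penalize pairwise mutual-information redundancy, and a joint-pair-versus-third-feature mutual information stands in as a hypothetical three-body proxy. The toy data and the weights alpha, beta, gamma are illustrative; at this size the minimizer can be found by brute force in place of a quantum optimizer.

    ```python
    import numpy as np
    from itertools import combinations, product
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.metrics import mutual_info_score

    rng = np.random.default_rng(2)
    n, d = 500, 6
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)        # features 0, 1 informative
    X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=n)   # feature 2 redundant with 0

    disc = lambda c: np.digitize(c, np.quantile(c, [0.25, 0.5, 0.75]))
    Xd = np.column_stack([disc(X[:, j]) for j in range(d)])  # 4-bin labels

    alpha, beta, gamma = 0.05, 1.0, 0.5
    hubo = {}
    rel = mutual_info_classif(X, y, random_state=0)
    for i in range(d):
        hubo[(i,)] = -rel[i] + alpha               # relevance + sparsity penalty
    for i, j in combinations(range(d), 2):
        hubo[(i, j)] = beta * mutual_info_score(Xd[:, i], Xd[:, j])
    for i, j, k in combinations(range(d), 3):      # hypothetical 3-body proxy
        hubo[(i, j, k)] = gamma * mutual_info_score(Xd[:, i] * 4 + Xd[:, j], Xd[:, k])

    energy = lambda bits: sum(c * np.prod([bits[v] for v in vs])
                              for vs, c in hubo.items())
    best = min(product([0, 1], repeat=d), key=energy)   # brute force, 2^6 states
    print("selected features:", [i for i, b in enumerate(best) if b])
    ```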
  • PDF
    We use a $^{87}\text{Rb}$ atomic vapor, suitable for an optically-pumped magnetometer (OPM) in Earth-field conditions, to study the noise properties of three strategies for generating pulsed optical pumping. We compare a frequency-modulated (FM) laser, amplitude modulation (AM) via an acousto-optic modulator (AOM), and amplitude modulation via a semiconductor optical amplifier (SOA). Pumping the ensemble to operate as a Bell-Bloom OPM, and with an equal degree of spin polarization, the three methods give nearly identical sensitivity, showing that the SOA, despite being an active device, can introduce negligible additional noise. Pumping the ensemble to operate as a free-induction-decay OPM, we observe longer unpumped coherence times with the SOA-AM method than with the FM method. Finally, using the higher power available from the SOA, we demonstrate an environment-limited sensitivity of $80\text{fT}/\sqrt{\text{Hz}}$ at $600\text{Hz}$ and $200\text{fT}/\sqrt{\text{Hz}}$ at $4\text{kHz}$, one to two orders of magnitude beyond what was achievable with the other pumping methods.
  • PDF
    Imaginarity, stemming from the complex structure of quantum mechanics, has recently emerged as a fundamental resource, yet its dynamical generation remains largely unexplored. In this work, we introduce the notion of imaginarity-generating power (IGP) of unitary dynamics, which quantifies the ability of unitary operations to produce imaginarity from initially real quantum states. To quantify imaginarity, we employ a measure based on the Hilbert--Schmidt norm, which we show to be monotone under real unital operations. Within the framework of dynamical resource theories, we derive an exact expression for the purity-constrained IGP in arbitrary dimensions and show that, for pure real input states, it depends solely on intrinsic and experimentally accessible properties of the unitary. We further analyze its average behavior over ensembles of states with varying purity under both uniform and Hilbert--Schmidt distributions. We prove that it satisfies the essential properties of a valid resource monotone within the dynamical resource theory of imaginarity. We also characterize the unitaries that maximize the IGP and determine the corresponding bounds. Moreover, for Haar-random unitaries, we show that the IGP concentrates near its maximal value in high dimensions with small fluctuations, indicating that typical high-dimensional quantum dynamics are highly effective at generating imaginarity.
  • PDF
    Fault-tolerant quantum computing (FTQC) is emerging as the architectural regime in which practical large-scale quantum workloads will execute. In this setting, however, multiprogramming is no longer a matter of partitioning a flat pool of qubits. Quantum error correction exposes a structured floorplan of data tiles, ancilla tiles, and magic-state service resources, so concurrent execution must account for compact placement, connectivity, routing headroom, and shared support infrastructure. This makes FTQC multiprogramming fundamentally harder than its NISQ counterpart: admission decisions can fragment the remaining floorplan, conservative reservations can waste ancilla, and dynamic contention across data, ancilla, and magic-state resources can degrade both throughput and quality of service. In this work, we develop a formal framework for FTQC multiprogramming that captures these structural constraints and their runtime implications. We formulate the baseline static allocation problem, extend it to limited-resource and online settings through hierarchy-aware scheduling policies, and further generalize it to cultivation-enabled architectures with dynamic magic-state generation. Through simulation on synthetic Clifford+T workloads, the proposed scheduler achieves a normalized system speedup of 3.1x, improving over prior FTQC multiprogramming baselines by ~29% while maintaining low mean slowdown.
  • PDF
    It is a fundamental question in epidemiology to estimate, model and predict the growth rate of a pandemic. Analogously, analysing the diffusion of innovation, (fake) news, memes, and rumours is of key importance in the social sciences. The resulting epidemic growth curves can be classified according to their growth rates. These have been found to range from exponential to both faster super-exponential curves and slower subexponential or polynomial curves. Previous research has lacked a unified explanatory framework capable of accommodating super-exponential, (stretched) exponential, and polynomial growth patterns within the same contact network. In this paper we propose a simple agent-based network model that can capture all these phases. We provide such a framework by modelling how transmission rates depend on spatial distance and on individuals' numbers of contacts. By comparing the growth rate of spreading processes with or without degree-dependent and/or distance-dependent contact rates through data-driven and synthetic simulations on real and modelled networks with underlying geometry, we find evidence that even a 'sublinear presence' of these effects can significantly slow down the growth rate on the same underlying network. We find that the growth rate is governed by a combination of three factors: geometry, the prevalence of weak ties, and superspreaders. We confirm our results with rigorous proofs in a theoretical model, using a spatial multiscale-argument in long-range heterogeneous first passage percolation. Our results give a plausible explanation of why the consecutive waves of a single pandemic can differ in their growth even if their spreading mechanisms are similar.
  • PDF
    In Orabona and Pál [2016], we introduced the shifted KT potentials to remove the $\ln \ln T$ factor in the parameter-free learning with expert bound. In this short technical note, I show that this is equivalent to changing the prior in the Krichevsky--Trofimov algorithm. Then, I show how to use the same idea to remove the $\ln \ln T$ factor in the data-independent bound for the Squint algorithm.
  • PDF
    Knots and links represent a fundamental motif of non-local connectivity that permeates the physical sciences from string theory to protein folds. While spectral braiding has been explored in two-band non-Hermitian models across various platforms, its direct simulation and characterization on programmable quantum hardware, particularly beyond two strands, remains a formidable challenge due to the limitations of variational optimization in these systems. Here, we introduce a family of non-Hermitian multi-band twister models and implement a non-variational protocol to characterize their complex braided band structures on a programmable superconducting quantum processor. By mapping the winding of eigenstates to the spectral topology, we devise an efficient measurement strategy that extracts braid information, including braid words and knot invariants like the Alexander and Jones polynomials, without requiring full spectral tomography or repeated optimization. We experimentally demonstrate the reconstruction of complicated knots and links such as the Hopf chain and Solomon's knot. Our approach provides a general framework for investigating exotic non-Hermitian topology on near-term quantum devices, opening a route to simulate more sophisticated topological structures in knot theory.
  • PDF
    Moiré superlattices of transition-metal dichalcogenides (TMDs) host strongly interacting Bose-Fermi mixtures in which bosonic excitons coexist with correlated electron lattices. Using ultrafast, time- and energy-resolved photoluminescence (PL) and reflectance microscopy, we show that strong exciton-electron and exciton-exciton repulsion can enable collective ballistic exciton transport in a WSe$_2$/WS$_2$ heterobilayer. The ballistic transport is energy-selective: repulsive interactions drive excitons into a higher moiré exciton band, where enhanced intersite hopping enables rapid spatial expansion. Correspondingly, the exciton mean-squared displacement (MSD) exhibits a quadratic time dependence ($\propto t^2$). This ballistic expansion is enhanced at fractional electron fillings where the electrons form generalized Wigner-crystal (GWC) orders. Afterwards, the system transitions into a mixed electron-exciton Mott state as Auger recombination and density depletion conclude the ballistic expansion. A one-dimensional Bose-Fermi Hubbard model solved using density-matrix renormalization group (DMRG) qualitatively reproduces the measured exciton transport and time-dependent response. It further confirms that strong cross-species interactions allow the electron crystal to perforate the exciton Mott background, accelerating its melting and enhancing exciton motion. Our results establish moiré TMDs as highly tunable platforms for realizing strongly interacting Bose-Fermi mixtures, which we employ here to demonstrate real-time control of intertwined bosonic and electronic order and to establish a route to the exciton insulator-fluid transition.
  • PDF
    Quantum state discrimination is a fundamental information processing task that serves as a building block for numerous applications and provides implications at the foundational level. In this work, we consider minimum error discrimination of multi-copy states, where instead of preparing a single system we assume that multiple instances of the same state are prepared. Now the discrimination allows for measurements from multiple parties with different measurement strategies varying from global measurement strategy to ones restricted to different forms of local operations and classical communication strategies. By comparing the average success probabilities in quantum and classical cases, we find a qubit strategy that outperforms all the bit strategies. However, we find that there are other bit-like operational theories which can outperform the best qubit strategies even with a classical measurement strategy and we are able to identify instances of different theories where different measurement strategies are optimal. In this way, we are able to find instances of nonlocality without entanglement as well as provide general bounds for bit-like operational theories.
  • PDF
    From the perspective of quantum information science, we investigate tree-level Bhabha scattering between an incident electron $A$ and a positron $B$, where $B$ is initially entangled with a spectator electron $C$, which does not participate in the scattering interaction. We find that the quantum electrodynamics (QED) scattering between $A$ and $B$ can drive the global $ABC$ system into a genuine tripartite entangled (GTE) state. Using four canonical tripartite entanglement metrics, we systematically characterize and quantify the GTE of the composite system, and demonstrate that the scattering momentum of the $A$-$B$ pair and the initial $B$-$C$ entanglement are the key resources governing GTE generation. We further analyze the monogamy of quantum correlations, which imposes fundamental constraints on the shareability of quantum resources in multipartite systems. Specifically, we systematically study the monogamy relations for the squared entanglement of formation and squared quantum discord in our scattering model, and find that monogamy constraints are markedly relaxed in the non-relativistic regime, enabling enhanced shareability of quantum correlations across the three particles. This work uncovers novel quantum correlation properties of fundamental QED scattering processes, and provides direct theoretical guidance for the development of QED-based quantum information processing protocols.
  • PDF
    We consider a many-body Hilbert space with a fixed global charge and show that the typical entanglement entropy of a subsystem, at the leading and subleading order in the thermodynamic limit, can be expressed in terms of a single quantity which represents the local thermal entropy at fixed charge density. We find a general formula which applies both to abelian U(1) symmetry and non-abelian SU(2) symmetry, including the case of a local Hilbert space which transforms under a general reducible representation of the symmetry group. We illustrate the general formula with model systems and discuss the relevance of the results as a probe of quantum chaos for physical Hamiltonians.
  • PDF
    Topological defects play a fundamental role in the investigation of symmetries in quantum field theories. For conformal field theories in two space-time dimensions, it is possible to construct these defects using lattice models allowing ab-initio analytical and numerical computations of their characteristics. In this work, topological defects are investigated in non-unitary conformal field theories using appropriate variations of the restricted solid-on-solid models. The relevant impurity models and the corresponding defect operators are constructed for the lattice system. Numerical computations are performed for the energy spectrum, eigenvalues of the defect operators as well as thermodynamic characteristics and compared with analytical predictions. Finally, renormalization group flows between the different fixed points are analyzed using numerical methods.
  • PDF
    Frontier AI both amplifies existing risks and introduces qualitatively novel challenges. Not only is there a notable lack of stable scientific consensus resulting from the rapid pace of technological change, but emerging frontier AI safety practices are often misaligned with, or may undermine, established risk management frameworks. To address these challenges, we systematically surface open problems in frontier AI risk management. Adopting a problem-oriented approach, we examine each stage of the risk management process - risk planning, identification, analysis, evaluation, and mitigation - through a structured review of the literature, identifying unresolved challenges and the actors best positioned to address them. Recognising that different types of open problems call for different responses, we classify open problems according to whether they reflect (a) a lack of scientific or technical consensus, (b) misalignment with, or challenges to, established risk management frameworks, or (c) shortcomings in implementation despite apparent consensus and alignment. By mapping these open problems and identifying the actors best positioned to address them - including developers, deployers, regulators, standards bodies, researchers, and third-party evaluators - this work aims to clarify where progress is needed to enable robust and meaningful consensus on frontier AI risk management. The paper does not propose specific solutions; instead, it provides a problem-oriented, agenda-setting reference document, complemented by a living online repository, intended to support coordination, reduce duplication, and guide future research and governance efforts.
  • PDF
    Continuous-variable (CV) quantum systems offer a natural framework for continuous optimization through their infinite-dimensional Hilbert spaces. In this paper, we propose the Complex Continuous-Variable Quantum Approximate Optimization Algorithm (CCV-QAOA), a variational framework operating in the complex domain that optimizes over complex decision variables. The method efficiently solves real and complex multivariate optimization problems. To demonstrate its versatility, we apply CCV-QAOA across a broad suite of optimization use cases, including convex quadratic minimization, scaling studies with circuit depth and cutoff dimension, constrained quadratic programs using penalty constructions, and non-convex benchmarks such as the Styblinski-Tang function and complex quartic landscapes.
  • PDF
    We construct and systematically assess four outer-crust equations of state based on relativistic nuclear mass models and a machine-learning mass table. Our aim is to quantify the sensitivity of the equilibrium composition and thermodynamic properties of the outer crust to the underlying nuclear input, and to evaluate how these differences propagate to neutron-star configurations that are particularly sensitive to crustal properties. Equilibrium sequences of nuclei were determined by minimizing the Gibbs free energy per baryon for cold, catalyzed matter in $\beta$-equilibrium. The resulting outer-crust equations of state were then employed in neutron-star structure calculations near the minimum-mass limit, where global observables are especially sensitive to the low-density equation of state. The four nuclear models predict different equilibrium sequences, last bound nuclei, and neutron-drip properties. These differences are confined to the deepest layers of the outer crust, beyond current experimental mass coverage. Nevertheless, they propagate only weakly to crust-dominated neutron-star configurations: the gravitational mass, radius, crustal thickness, and fractional moment of inertia differ by less than one percent among the models considered. Modern nuclear-mass models provide consistent outer-crust equations of state for neutron-star applications. Although the detailed composition near neutron drip remains model dependent, the corresponding uncertainties have only a minor impact on the global properties of crust-dominated neutron stars. Therefore, these outer-crust equations of state provide a robust low-density description for astrophysical modelling and for future extensions toward unified neutron-star equations of state.
  • PDF
    Diffusion large language models (dLLMs) offer parallel decoding and bidirectional context, but state-of-the-art dLLMs require billions of parameters for competitive performance. While existing distillation methods for dLLMs reduce inference steps within a single architecture, none address cross-architecture knowledge transfer, in which the teacher and student differ in architecture, attention mechanism, and tokenizer. We present TIDE, the first framework for cross-architecture dLLM distillation, comprising three modular components: (1) TIDAL, which jointly modulates distillation strength across training progress and diffusion timestep to account for the teacher's noise-dependent reliability; (2) CompDemo, which enriches the teacher's context via complementary mask splitting to improve predictions under heavy masking; and (3) Reverse CALM, a cross-tokenizer objective that inverts chunk-level likelihood matching, yielding bounded gradients and dual-end noise filtering. Distilling 8B dense and 16B MoE teachers into a 0.6B student via two heterogeneous pipelines outperforms the baseline by an average of 1.53 points across eight benchmarks, yielding notable gains in code generation, where HumanEval scores reach 48.78 compared to 32.3 for the AR baseline.
  • PDF
    Many well-known theorems establish sufficient criteria for linearizability of a vector field in terms of the eigenvalues of its linear approximation. By attaching weights to coordinates so that some directions are considered "linear", others "quadratic", and so on, one can define the notion of a weighted linear approximation. It is thus natural to ask when a vector field is "weighted-linearizable". In this paper, we formulate a weighted version of the non-resonance condition appearing in the Poincaré and Sternberg linearization theorems and show that it implies weighted linearizability. Our approach first addresses weighted linearization on the level of formal power series. In doing so, we develop a general framework to make sense of a power series version of Moser's trick, a technique used to prove various normal form results in geometry. This formal Moser trick works over any field of characteristic zero and may be of independent interest.
  • PDF
    We consider a holographic Einstein-Maxwell model in five dimensions with pure gauge and mixed gauge-gravitational Chern-Simons terms to study anomaly-induced transport in the presence of explicit symmetry breaking. We include the full backreaction of the scalar field and gauge fields on the metric and compute the anomalous transport coefficients using Kubo formulae involving charge and energy current correlators. Our findings reveal that, in the presence of explicit symmetry breaking, anomaly-induced transport phenomena can extend beyond anomalous currents and affect the non-anomalous sector as well. The transport coefficients exhibit a clear dependence on the symmetry-breaking mass parameter, highlighting the interplay between quantum anomalies and explicit symmetry breaking in holographic systems.
  • PDF
    Sensing the direction of arrival and polarization of impinging signals is a key prerequisite for beamforming and interference mitigation in modern wireless communication systems. Dynamic metasurface antennas (DMAs) can multiplex direction- and polarization-dependent field information onto a single detector by sequentially switching between programmable configurations. This makes DMAs attractive for joint direction-of-arrival and polarization (DoA-P) estimation with a single radio-frequency chain. Experimental demonstrations have so far relied on random pre-measured configuration sequences because optimizing the configurations requires an accurate forward model of the fabricated DMA. Here, we use an experimentally calibrated model based on multiport-network theory (MNT) to optimize DMA configuration sequences for DoA-P estimation. Our experimentally calibrated MNT model predicts the dual-polarized far-field response of our 96-element DMA for arbitrary admissible configurations, enabling model-based optimization without additional radiation-pattern measurements. We optimize sequences using effective-rank-based surrogate objectives and compare them with random sequences as a function of the sequence length and the noise level. The optimized sequences yield the largest gains in the intermediate-SNR and intermediate-sequence-length regime, where the inverse problem is neither noise-limited nor already solved by random diversity. We also tackle a dual-source scenario involving a jammer and a desired transmitter. Our results illustrate some of the potential in the context of jamming-resilient communications that is unlocked by experimentally calibrated MNT models for fabricated DMAs.
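    One standard effective-rank definition that such surrogate objectives can build on is the entropy of the normalized singular-value spectrum (Roy and Vetterli, 2007); whether the paper uses exactly this form is an assumption here. In the sketch, each row stands in for the detector response under one DMA configuration: a diverse sequence probes more independent directions of the (DoA, polarization) space than a nearly repeated one.

    ```python
    import numpy as np

    def effective_rank(M):
        """exp of the entropy of the normalized singular values of M."""
        s = np.linalg.svd(M, compute_uv=False)
        p = s / s.sum()
        p = p[p > 0]
        return np.exp(-(p * np.log(p)).sum())

    rng = np.random.default_rng(4)
    # 16 configurations x 96 elements, complex-valued responses (toy model).
    diverse = rng.normal(size=(16, 96)) + 1j * rng.normal(size=(16, 96))
    repeated = diverse[:1] + 0.1 * (rng.normal(size=(16, 96))
                                    + 1j * rng.normal(size=(16, 96)))
    print(effective_rank(diverse), effective_rank(repeated))
    ```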
  • PDF
    Orbital energy splittings are important quantum dot parameters for the operation of hole spin qubits. They are known to depend on the lateral confinement of the quantum dots. However, when changing the top and plunger gate voltages, which are the typical control parameters for qubit applications, changes in these energy splittings are typically negligible, both as measured in experiment and as assumed in effective theories. Here, we study the singlet-triplet (ST) splittings, which depend on the orbital splittings, of a double quantum dot (DQD) in a Ge/SiGe heterostructure using photon-assisted tunneling (PAT) and pulsed-gate spectroscopy. We find that the ST splittings have a surprisingly strong dependence on the top gate voltages, leading to anomalous PAT measurements. We combine data from both measurements in a model that well describes the linear gate-voltage dependence of the ST splittings. Finally, we show that the ST splittings of the two dots exhibit similar linear gate-voltage dependences when the device is retuned such that their ratio is significantly different.
  • PDF
    Breakthrough progress in vision-based navigation through unknown environments has been achieved by using multimodal large language models (MLLMs). These models can plan a sequence of motions by evaluating the current view at each time step against the task and goal given to the agent. However, current zero-shot Vision-and-Language Navigation (VLN) agents powered by MLLMs still tend to drift off course, halt prematurely, and achieve low overall success rates. We propose Three-Step Nav to counteract these failures with a three-view protocol: First, "look forward" to extract global landmarks and sketch a coarse plan. Then, "look now" to align the current visual observation with the next sub-goal for fine-grained guidance. Finally, "look backward" audits the entire trajectory to correct accumulated drift before stopping. Requiring no gradient updates or task-specific fine-tuning, our planner drops into existing VLN pipelines with minimal overhead. Three-Step Nav achieves state-of-the-art zero-shot performance on the R2R-CE and RxR-CE datasets. Our code is available at https://github.com/ZoeyZheng0/3-step-Nav.
  • PDF
    We consider series expansions in bases of classical orthogonal polynomials. When such a series solves a linear differential equation with polynomial coefficients, its coefficients satisfy a linear recurrence equation. We interpret this equation as the numerator of a fraction of linear recurrence operators. This interpretation lets us give a simple and unified view of previous algorithms computing these recurrences, with a noncommutative Euclidean algorithm as the algorithmic engine. Finally, we demonstrate the effectiveness of our approach on various examples.
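    As a standard worked example of this series-to-recurrence dictionary (classical Chebyshev identities, not taken from the paper): with the usual halved-first-coefficient convention,

```latex
\[
  f(x) = \frac{c_0}{2} + \sum_{n \ge 1} c_n T_n(x), \qquad
  x\,T_n(x) = \frac{T_{n+1}(x) + T_{n-1}(x)}{2},
\]
\[
  x f(x) \;\longleftrightarrow\; \tilde{c}_n = \frac{c_{n-1} + c_{n+1}}{2}
  \qquad (n \ge 0,\ c_{-1} := c_1),
\]
```

    so multiplication by $x$ acts as a linear recurrence operator on the coefficient sequence $(c_n)$, and a linear differential equation with polynomial coefficients induces a linear recurrence for the coefficients.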
  • PDF
    We introduce ProcFunc, a library for Blender-based procedural 3D generation in Python. ProcFunc provides easy-to-use Python functions that streamline creating, combining, analyzing, and executing procedural generation code. ProcFunc makes it easy to create large-scale, diverse training data via combinatorial composition of semantic components. VLMs can use ProcFunc to edit procedural material and geometry code and can create new procedural code with significantly fewer coding errors. Finally, as an example use case, we use ProcFunc to develop a new procedural generator of indoor rooms, which includes a collection of new compositional procedural materials. We demonstrate the detail, runtime efficiency, and diversity of this room generator, as well as its use for 3D synthetic data generation. Source code is available at https://github.com/princeton-vl/procfunc.
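    To make the combinatorial-composition idea concrete, here is a plain-bpy sketch of the underlying pattern (this is not ProcFunc's API, whose function names I do not assume; it only illustrates sampling scenes as compositions of primitive components inside Blender's Python):

```python
import random
import bpy  # Blender's Python API; run inside Blender

# Each "semantic component" is a named primitive placed at a location.
PRIMITIVES = {
    "cube": lambda loc: bpy.ops.mesh.primitive_cube_add(size=1.0, location=loc),
    "sphere": lambda loc: bpy.ops.mesh.primitive_uv_sphere_add(radius=0.5, location=loc),
    "cone": lambda loc: bpy.ops.mesh.primitive_cone_add(radius1=0.5, location=loc),
}

def generate_scene(n_objects=10, seed=0):
    # Sample a scene as a combinatorial composition of components:
    # each object is a (primitive kind, position) pair.
    rng = random.Random(seed)
    for _ in range(n_objects):
        kind = rng.choice(list(PRIMITIVES))
        loc = (rng.uniform(-5, 5), rng.uniform(-5, 5), 0.0)
        PRIMITIVES[kind](loc)

generate_scene(n_objects=12, seed=42)
```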
  • PDF
    We introduce Hyper Input Convex Neural Networks (HyCNNs), a novel neural network architecture designed for learning convex functions. HyCNNs combine the principles of Maxout networks with input convex neural networks (ICNNs) to create a neural network that is always convex in the input, theoretically capable of leveraging depth, and, compared to ICNNs, performs reliably when trained at scale. Concretely, we prove that HyCNNs require exponentially fewer parameters than ICNNs to approximate quadratic functions up to a given precision. Across a series of synthetic experiments, we demonstrate that HyCNNs outperform existing ICNNs and MLPs in terms of predictive performance for convex regression and interpolation tasks. We further apply HyCNNs to learn high-dimensional optimal transport maps for synthetic examples and for single-cell RNA sequencing data, where they often outperform ICNN-based neural optimal transport methods and other baselines across a wide range of settings.
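    A minimal sketch of the convexity-preserving ingredients (my own illustrative composition, not the paper's exact architecture): a maxout over affine maps of the input is convex in the input, and nonnegative mixing of convex features plus a direct affine path in the input, followed by another max, stays convex.

```python
import torch
import torch.nn as nn

class ConvexMaxoutNet(nn.Module):
    # Convexity in x follows from three closure rules:
    # (1) a max over affine functions of x is convex;
    # (2) nonnegative linear combinations of convex functions
    #     plus an affine function of x are convex;
    # (3) a max over convex functions is convex.
    def __init__(self, dim, h=32, k=4):
        super().__init__()
        self.k = k
        self.Ax0 = nn.Linear(dim, h * k)   # maxout pieces: affine in x
        self.Wz = nn.Parameter(0.1 * torch.rand(h, h * k))  # clamped >= 0
        self.Ax1 = nn.Linear(dim, h * k)   # skip path: affine in x
        self.wout = nn.Parameter(0.1 * torch.rand(h))       # clamped >= 0
        self.Axo = nn.Linear(dim, 1)       # final affine path in x

    def maxout(self, t):
        # Reshape (batch, h*k) -> (batch, h, k) and max over the k pieces.
        return t.view(t.shape[0], -1, self.k).max(dim=-1).values

    def forward(self, x):
        z = self.maxout(self.Ax0(x))                              # rule (1)
        z = self.maxout(z @ self.Wz.clamp(min=0) + self.Ax1(x))   # rules (2)+(3)
        return z @ self.wout.clamp(min=0) + self.Axo(x).squeeze(-1)
```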
  • PDF
    We develop the Schwinger-Keldysh path-integral formalism for open non-Abelian gauge theories that are gauge-fixed via the BRST method in covariant gauges. We focus on generic initial states, pure and mixed, specified at finite times suitable for non-equilibrium processes. We pay particular attention to the handling of the indefinite Hilbert space, the construction of BRST-invariant Schrödinger-picture wavefunctionals, density matrices and inner product, the implementation of the Hata-Kugo prescription, and the role of boundary terms at both the initial and final times. We highlight the advantages of the Nakanishi-Lautrup field representation in dealing with initial/final conditions. The resulting Schwinger-Keldysh path integral is manifestly invariant under a diagonal (retarded) BRST symmetry for arbitrary physical initial states, whether pure or mixed. From this, we obtain the corresponding Ward-Takahashi-Slavnov-Taylor identities, valid perturbatively. Non-perturbatively, the Gribov ambiguity is expected to break or modify the BRST symmetry. The naive advanced BRST symmetry is shown to be explicitly violated by the in-in boundary conditions. We show that the Feynman-Vernon influence functional derived by integrating out charged matter and/or hard gluon modes remains (perturbatively) BRST invariant. When the Open EFT action is expanded to second order in advanced fields, it exhibits an exact symmetry under a contraction of the original BRST symmetry. This Keldysh BRST symmetry is equivalent to the BRST associated with the retarded gauge transformations together with a linearly realized BRST transformation of the advanced fields. These symmetries govern the structure of the leading terms in an Open EFT. We illustrate this with the explicit example of Hard Thermal Loop Effective Theory, and construct the general form of the Open EFT in a Higgs phase when all gauge symmetries are spontaneously broken.
  • PDF
    Small language models (SLMs) offer computational efficiency for scalable deployment, yet they often fall short of the reasoning power exhibited by their larger counterparts (LLMs). To mitigate this gap, current approaches invoke an LLM to generate tokens at points of reasoning divergence, but these external calls introduce substantial latency and costs. Alternatively, standard distillation is often hindered by capacity limitations, as SLMs struggle to accurately mimic the LLM's complex generative distribution. We address this dilemma by identifying local sufficiency: at divergence points, the LLM's preferred token consistently resides within the SLM's top-K next-token predictions, even when it fails to emerge as the SLM's top-1 choice. We therefore propose SELECT TO THINK (S2T), which reframes the LLM's role from open-ended generation to selection among the SLM's proposals, simplifying the supervision signal to discrete candidate rankings. Leveraging this, we introduce S2T-LOCAL, which distills the selection logic into the SLM, empowering it to perform autonomous re-ranking without inference-time LLM dependency. Empirically, we demonstrate that a 1.5B SLM's top-8 candidates capture the 32B LLM's choice with a 95% hit rate. Translating this potential into performance, S2T-LOCAL improves greedy decoding by 24.1% on average across benchmarks, effectively matching the efficacy of 8-path self-consistency while operating with single-trajectory efficiency.
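    The select-among-candidates supervision can be sketched in a few lines (a hypothetical helper consistent with the abstract, not the paper's code): the SLM proposes its top-K tokens and the target is the index of the LLM-preferred token within that candidate set.

```python
import torch

def topk_selection_targets(slm_logits, llm_logits, K=8):
    # slm_logits, llm_logits: (batch, vocab) next-token logits.
    # Returns the SLM's top-K candidate ids (batch, K) and, per row,
    # the index in 0..K-1 of the LLM's preferred token among those
    # candidates, or -1 when the LLM's choice falls outside the top-K.
    cand = slm_logits.topk(K, dim=-1).indices             # (batch, K)
    llm_choice = llm_logits.argmax(dim=-1, keepdim=True)  # (batch, 1)
    hit = cand.eq(llm_choice)                             # (batch, K) bool
    target = torch.where(hit.any(-1),
                         hit.float().argmax(-1),
                         torch.full_like(llm_choice.squeeze(-1), -1))
    return cand, target
```

    On held-out positions, the fraction of rows with `target >= 0` corresponds to the top-K hit rate the abstract reports (95% for K=8 in the 1.5B/32B pairing); the distilled SLM then only needs to solve a K-way re-ranking problem rather than match the LLM's full distribution.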

Recent comments

Nathan Claudet Apr 30 2026 06:07 UTC

Actually, ${|+\rangle}^{\otimes n}$ is a circle graph state, since the graph with $n$ vertices and no edges is a circle graph. The corresponding chord diagram has $n$ non-intersecting chords. Examples of graphs that are not circle graphs are shown in Figure 2 (left).

Frank E. S. Steinhoff Apr 29 2026 23:43 UTC

I see, thank you for the clarification. Would this notion allow for this annoying example:
from any entangled state you can obtain product states via suitable local measurements, and these product states can be mapped by LUs into $|+\rangle^{\otimes n}$, which is, technically, a non-circle graph sta

...(continued)
Nathan Claudet Apr 29 2026 22:25 UTC

It is true that for graph states, the notions of SLOCC-equivalence and LU-equivalence coincide. Thus, Result 1 can be rephrased as follows: "the only graph states that are SLOCC-equivalent to circle graph states are circle graph states themselves". In other words, if a graph state is obtained from a

...(continued)
Frank E. S. Steinhoff Apr 29 2026 20:39 UTC

Since SLOCC-equivalence=LU-equivalence for graph states (Proposition 9 in https://arxiv.org/abs/quant-ph/0602096), isn't the closure of circle graph states under SLOCC settled? Otherwise, there would exist a counter-example to Result 1 and the sentence "The only graph states that are LU-equivalent t

...(continued)
Tim D Zhang Apr 29 2026 15:57 UTC

Haha, “non-conformal distortion” is a nice way to put it. Jokes aside, maybe this is something their academic committee should take a look at.

Siddhant Singh Apr 29 2026 07:50 UTC

Hi, congratulations on the work. I see that earlier work on distributed error correction is cited. However, it would be worth also looking into two recent works https://www.nature.com/articles/s41534-025-01146-2 and https://arxiv.org/abs/2601.07241 which provide state of the art protocols for Distribu

...(continued)
Leeseok Kim Apr 29 2026 00:08 UTC

Cool work! I have a quick question about the quadratic fast-forwarding point in Implication 3.

I think the evolution under your Lindbladian can be written as

$e^{T\mathcal L_\delta}(\rho) = \mathbb E_{G\sim N(0,T\delta^2)} \left[e^{-iH(\delta T+G)} \rho e^{iH(\delta T+G)} \right].$

So t

...(continued)
Stephen Jordan Apr 28 2026 21:51 UTC

Thanks! Your BPQM results are very interesting points of comparison. It looks like BPQM wins for the sparser instances and FGUM wins for the denser ones.

Kwok Ho Apr 28 2026 18:15 UTC

For me personally, if I open the pdf in the browser, I cannot see figure 1. But if I download it and open it in, for example, Preview or Adobe Acrobat, I can see figure 1.

Jason Chadwick Apr 28 2026 14:22 UTC

Thanks for the comment. Not sure why some figures are missing - they all look fine on our end. Possibly some subtlety of different PDF viewers. We will update the submission to use more robust image formats.

Regarding Figure 10, we were limited by the computational cost of the Monte Carlo simulat

...(continued)