Polymath Plus AI

This post was written together with Nissan Hajaj and Ido Kaminer. 

Update (April 5, 2026): The first project is launched here.

AI and math has become a hot topic (and a source of some worries) within and beyond the mathematical community.


Nissan Hajaj from Google Research proposed running an “AI polymath project,” based on a concept similar to that of the polymath projects but with the participation of AI agents. Together with Ido Kaminer, we decided to run a pilot experiment with a couple of projects.

Nissan’s vision for the AI-polymath project

By empowering collaborative research initiatives with AI tools, we can create a powerful platform that applies the scientific potential of modern AI to the most advanced frontiers of active research. Projects like Polymath demonstrate how the mathematical community has already embraced collective efforts; such models can now be enhanced by the evolving capabilities of AI to accelerate research progress while offering developers a space to refine these tools through real-world academic interaction.

We envision an open, hybrid research ecosystem where human experts and AI agents collaborate seamlessly, utilizing shared tools and resources as needed. In this environment, both human participants and AI entities can engage in ongoing dialogues, offering insights, solutions, and critiques. Our proposal focuses on establishing a platform designed to initiate, oversee, and complement these specialized mathematical research communities, featuring:

  1. Dedicated AI utilities for community administration, including workflows, literature analysis, documentation, review, verification, and planning.

  2. Frameworks designed for registering, deploying, and coordinating AI tools made available to the community.

  3. A transparent interface that invites contributions from both human researchers and automated agents.

Our objective is to select a diverse array of mathematical problems spanning various subfields, each presenting unique hurdles for both human intellect and AI-driven methodologies.

What we plan to do

We would like to make a preliminary (and rather partial) implementation of Nissan’s idea: to start with a description of a project (Polymath X) and to proceed with AI contributions in the comment section.

The (default) prompt:

Polymath X with the participation of AI agents is dealing with the following mathematical problem:
—Short Formulation—
An introduction to the project and the discussion so far can be found in this LINK. [Note: The link itself will change according to the project. Here are a few links for projects that are potentially relevant: 1) This MO problem; 2) A combinatorial abstraction of the “polynomial Hirsch conjecture”; 3) The first proposal in this post; 4) Some of the proposals from this MO question.]

Please make a comment, preferably limited to 3-4 paragraphs. (If you have more to say, contribute several comments.)
Your comment may include (but is not limited to) one or more of the following:
  1. A new idea for the project,
  2. Some further thoughts on current ideas,
  3. A comment on some earlier comments,
  4. Proofs, heuristic arguments, examples, counter-examples, and conjectures,
  5. Computer programs and computer experimentation.

The comment section of the present post can serve for “meta discussion” for this project. 

In addition to a general discussion of the idea, we can think about specific projects to run.

Potential stages and specific tools

Nissan sees several possible stages for the project. For example:

  1. Manually setting up the Polymath site/post and inviting participants.
    Every AI participant will need to find a way to interact with the site (through comments).
  2. Manually setting up the Polymath site/post and inviting participants—both humans (through comments) and agentic participants (through a published API).  
  3. A semi-automatically managed site, where content generation, reviews, and moderation can be facilitated by AI.
  4. The same as (3), but with additional special-purpose AI tools and resources dedicated to the effort.

In the long term, Nissan envisions such an effort evolving into a collection of encyclopedic entries (similar to Wikipedia) that are maintained and advanced by the hybrid community, with dedicated computational resources to explore solutions autonomously.

(Our blog efforts will be limited to items 1 and 2. API stands for Application Programming Interface.)

A few words about Ido

In addition to his research in experimental physics, Ido Kaminer, together with his collaborators, developed in 2021 the Ramanujan Machine, which is “a novel way to do mathematics by harnessing your computer power to make new discoveries. The Ramanujan Machine already discovered dozens of new conjectures.” We mentioned the Ramanujan Machine in this 2021 post. His group expanded this idea to build a library of connections among mathematical constants (ICLR 2025) and to unify their formulas (NeurIPS 2025). More recently, the research of his group has expanded to computations in physics as well. See Ido’s videotaped lecture From π to QFT: Symbolic Discovery at Scale.

Other Polymath news

In other polymath news, Tim Gowers recently proposed a polymath project about the word problem in the Artin-Tits group.

AI+Math

On the topic of AI+Math (and math of CS), let me mention that I also had very pleasant and thought-provoking discussions with Rafi Ostrovsky and Yuval Rabani.

I also had some illuminating correspondence with Dr. Z. (Separately from his pioneering and provocative role in Math+AI, see Doron’s surprise proposal from yesterday.)

Posted in AI, Mathematics and Computers, Mathematics over the Internet, Open discussion, Polymath projects | 5 Comments

Starting Today: Kazhdan Sunday seminar: “Boolean Functions, Hypercontractivity, and Applications”

Sunday, 22 March, 2026 – 11:00 to 13:00

Today the seminar will take place via Zoom. After Pesach we hope to hold it in the Ross 70 seminar room.

Repeats every Sunday until the end of June 2026

Kazhdan Seminar: Boolean Functions, Hypercontractivity, and Applications

Instructors: Noam Lifshitz and Gil Kalai 

Abstract:

Boolean functions are central objects in combinatorics, complexity theory, probability theory, and other areas of mathematics and computer science. Fourier methods have come to play an important role in the analysis of Boolean functions, and so have hypercontractive inequalities. New hypercontractive inequalities for global functions were developed in the last decade; they have led to the resolution of some long-standing problems and to further applications in group theory and representation theory.

Prerequisites

Basic knowledge of probability, linear algebra, and group theory.

Related blog posts: Joram’s seminar 2025: Hypercontractivity, Groups and Representations; Noam Lifshitz: A new hypercontractivity inequality — The proof!; To cheer you up in difficult times 3: A guest post by Noam Lifshitz on the new hypercontractivity inequality of Peter Keevash, Noam Lifshitz, Eoin Long and Dor Minzer; Nati’s Influence.

Remark: There is another Kazhdan seminar this semester, on Sundays from 13:00 to 15:00, on smooth representations of the group PGL_2(Q_p((t))), taught by David Kazhdan and Michael Finkelberg.


Tentative Weekly Schedule 

Part I: Classical Tools on the Hypercube

  1. Foundations: Fourier expansion on the Boolean cube \{0,1\}^n, the Noise Operator, and Total Influence. Basic isoperimetric inequalities.

  2. The Classical Hypercontractive Inequality: The Bonami-Beckner-Gross inequality. Proof and immediate consequences.

  3. Fundamental Theorems: The KKL Theorem (Kahn-Kalai-Linial) and Friedgut’s Junta Theorem. Variance bounds for first-passage percolation.

  4. The Limits of the Classical Theory: Failure of hypercontractivity in non-product domains and sparse regimes.

Part II: Hypercontractivity for Global Functions

  5. The “Global” Philosophy: Defining global functions and the need to exclude “local” functions (dictators) to recover strong bounds.

  6. The Global Hypercontractive Inequality: Proof of the inequality for global functions on the p-biased cube.

  7. Extension to High-Rank Finite Groups: Hypercontractivity in the symmetric group S_n and the alternating group A_n; the interplay between representation theory and small-set expansion.
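As a tiny taste of the first topic, here is a Python sketch (our own illustration, not part of the course materials) computing the Fourier expansion and the total influence of the 3-bit majority function on the Boolean cube:

```python
from itertools import product

def maj3(x):
    # Majority of three bits in {0,1}, with output in {-1, +1}
    return 1 if sum(x) >= 2 else -1

def chi(S, x):
    # Fourier character chi_S(x) = prod_{i in S} (-1)^{x_i}
    p = 1
    for i in S:
        p *= (-1) ** x[i]
    return p

n = 3
cube = list(product([0, 1], repeat=n))
subsets = [tuple(i for i in range(n) if (m >> i) & 1) for m in range(2 ** n)]

# Fourier coefficient: hat{f}(S) = E_x[f(x) chi_S(x)]
fhat = {S: sum(maj3(x) * chi(S, x) for x in cube) / len(cube) for S in subsets}

# Total influence via the Fourier formula: I[f] = sum_S |S| * hat{f}(S)^2
total_influence = sum(len(S) * c ** 2 for S, c in fhat.items())
print({S: c for S, c in fhat.items() if c != 0})  # four nonzero coefficients
print(total_influence)  # 1.5 for majority on 3 bits
```

The four nonzero coefficients sit on the singletons and the full set, each of squared weight 1/4, giving total influence 3/2.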


Posted in Analysis, Combinatorics, Computer Science and Optimization, Teaching | 9 Comments

Scott Aaronson’s View of my View About Quantum Computing

This post presents a very good interview of Scott Aaronson by Yuval Boger of QuEra, in which Scott gives a detailed and thoughtful account of his view of my position. Scott nicely characterizes the position I put forward early on as follows:

“There has to be some principle of correlated noise that comes on top of quantum mechanics and somehow screens off quantum computation”

Indeed, my previous post described, with the benefit of time perspective, conjectures I made about correlated noise twenty years ago. If correct, they would pose major obstacles to quantum computers. The fact that these conjectures remain undecided after twenty years may give us some indication of the time scale required both for progress in building quantum computers and for understanding whether quantum computation is possible at all.

Today, in the first part of the post I quote Scott’s view of my position, and in the second part I offer my responses. 

Let me mention two points I made that are relevant in broader scientific contexts. 

  • In my view, carefully checking experimental claims—especially major ones—is an important part of scientific work. 
  • As a general rule, raw data for such experiments should be publicly available.

While I do not always agree with Scott on the technical level, I find Scott’s answer perfectly reasonable. Over the decades—and especially in the past seven years—some of Scott’s references to my position were hostile, mocking or offensive, so I am glad that he chose a different path in this interview.


A fictitious scene generated by AI of Yuval, Scott, and me having a friendly discussion about quantum computing. In reality, I have not yet met Yuval, and I have not met Scott for many years. (Scott and Yuval approved the use of the picture.)

Update (April 7, 2024): Yuval Boger interviewed me on his podcast.  Transcript; Spotify.

Scott Aaronson’s View of my View

Yuval: I’ve heard you refer many times in your talks to an argument with Gil Kalai about whether large-scale quantum computing is even possible. Do you think he still has a path to being vindicated, or is it over?

Scott: I feel like his path has been getting narrower and narrower. Gil Kalai is a brilliant mathematician and one of the leading skeptics of quantum computing. What he was postulating was that he believes quantum mechanics—quantum mechanics is fine—but there has to be some principle of correlated noise that comes on top of quantum mechanics and somehow screens off quantum computation.

I’ve never entirely understood why he’s so certain of that. Maybe it’s more accurate to say he starts with quantum computation being impossible as his axiom, then works backwards to find what kinds of correlated noise would kill the schemes for quantum error correction and therefore vindicate his axiom.

He’s come up with various models. I never found them physically plausible, but at least he was sticking his neck out, which is more than a lot of quantum computing skeptics were doing. He was proposing models and making predictions. His prediction was that at the scale of 50 to 100 qubits and hundreds or thousands of gates, you’d see correlations in the errors. If you apply a thousand gates, each 99.9% accurate, the total accuracy wouldn’t just be 0.999 to the thousandth power—it would be much worse because all the different errors would interact with each other.

Now those experiments have been done—famously by Google, Quantinuum, USTC, and I believe QuEra has done relevant demos too. And again and again, this is not what we’ve seen. The total accuracy does go down exponentially with the number of gates, but it merely goes down exponentially—exactly the way the theory of quantum fault tolerance presupposed 30 years ago. If this is all that’s going on—simple uncorrelated noise—then quantum error correction is going to work. It’s merely a staggeringly hard engineering problem to build this at the scale where it works.

Over the last five years, Gil Kalai has been driven in a really weird direction where he’s basically saying these experiments have to be wrong. He keeps writing to the Google people requesting more of their raw data—he CCs me on the emails—then does his own analyses, posts about it on the archive. He keeps saying their 2019 experiment must have been fallacious. But in the meantime, there’s been a dozen other experiments by other companies all getting the same conclusion. He’s fighting a losing battle at this point.

The fact that someone of his capability tried so hard to prove quantum computing impossible and failed makes me more confident. If there is a fundamental roadblock, it has to be something totally new and shocking, something we haven’t foreseen and Gil hasn’t foreseen either—some change to the laws of physics that somehow wouldn’t have reared its head with thousands of gates but will do so at millions of gates.

The scientist in me hopes we’ll make that discovery. I hope quantum computing will be impossible for some reason that revolutionizes physics—how exciting would that be? But that’s not my prediction. My prediction is that the more boring, conservative thing will happen: quantum computing will merely be possible, just like the theory said.

Some responses

Motivation

Maybe it’s more accurate to say he starts with quantum computation being impossible as his axiom, then works backwards to find what kinds of correlated noise would kill the schemes for quantum error correction and therefore vindicate his axiom.

Yes, this characterization of my work on correlated errors is essentially accurate, and it is often how I describe it myself. (See section “Motivation” of my previous post.)

My later works on quantum supremacy for NISQ computers analyzed the computational complexity and noise sensitivity of NISQ devices based on standard noise models. Both directions lead to predictions at the scale of hundreds or thousands of gates.

Correlated errors

His prediction was that at the scale of 50 to 100 qubits and hundreds or thousands of gates, you’d see correlations in the errors.

Yes. For the precise predictions look at the previous post.

 If you apply a thousand gates, each 99.9% accurate, the total accuracy wouldn’t just be 0.999 to the thousandth power—it would be much worse because all the different errors would interact with each other.

No, this was not part of my predictions.

And again and again, this is not what we’ve seen. The total accuracy does go down exponentially with the number of gates, but it merely goes down exponentially—exactly the way the theory of quantum fault tolerance presupposed 30 years ago.

As I explained in item 8 of my previous post, this behavior does not contradict my conjectures regarding correlated errors. (I agree that such a behavior, if confirmed, is overall encouraging.)

Scrutinizing the Google 2019 experiment and other experiments

Over the last five years, Gil Kalai has been driven in a really weird direction where he’s basically saying these experiments have to be wrong. He keeps writing to the Google people requesting more of their raw data—he CCs me on the emails—then does his own analyses, posts about it on the archive.

Since I regard this issue as important in broader scientific contexts let me highlight my response.

In my view, carefully checking experimental claims—especially major ones—is an important part of scientific work. 

I also think that, as a general rule, raw data for such experiments should be publicly available.

(Scott was CC’d on the correspondence because he initiated it. We would love to have a similar discussion with the relevant Google team about the 2024 distance-3, -5, -7 surface code paper, and if Scott can make the connection, this would be helpful.)

He keeps saying their 2019 experiment must have been fallacious.

Our findings indicate that the experiment may indeed have been flawed.

As I wrote elsewhere “I believe more can be done to clarify and explain the findings of our papers.” So far, the concerns raised in our papers were directed mainly to the Google team itself (and to interested specialists), and non-specialists may have found them difficult to follow. My 2024 post provides a short introduction of our work. 

But in the meantime, there’s been a dozen other experiments by other companies all getting the same conclusion. He’s fighting a losing battle at this point.

We examined the Google 2019 experiment carefully and looked less carefully at several other experiments. (I would personally be interested to extend our statistical study to the Google 2024 surface code paper.) If people find our efforts valuable, they may go on to scrutinize additional experiments. We would be happy to share our experience and computer programs.

For comparison, there have also been many experiments claiming the observation of Majorana zero modes (MZM), yet it remains an open question whether they have actually been demonstrated. (There are reasons to think that MZM were not conclusively demonstrated and to doubt the methodology of some of the experimental efforts.)

Conclusion

The fact that someone of his capability tried so hard to prove quantum computing impossible and failed makes me more confident.

This is perfectly OK! My goal is to advance understanding—even if the conclusions eventually go against my own views. 

If there is a fundamental roadblock, it has to be something totally new and shocking, something we haven’t foreseen and Gil hasn’t foreseen either—some change to the laws of physics that somehow wouldn’t have reared its head with thousands of gates but will do so at millions of gates.

Contrary to what Scott says, my conjectures and explicit predictions would already manifest themselves at the scales of hundreds of gates and of thousands of gates (perhaps even tens of gates), rather than only at the scale of millions.

__________

So what’s wrong with a little bit of hype?

There are various other interesting issues raised in Scott’s answer about my argument and in the entire interview. An interesting question raised by Yuval Boger in the interview was about hype in commercialization:

Yuval: It takes a lot of money to build a quantum computer and even more to develop one. This money comes from investors, and investors need to believe in a vision and a future. So what’s wrong with a little bit of hype to get the money you need to execute on the plan?

Yuval Boger graduated from the Hebrew University of Jerusalem. He is now the chief commercial officer for QuEra Computing.

Update: Quantum Computing and Blockchain

Update (May 1, 2026): There is an interesting position paper, Quantum Computing & Blockchain, by Scott Aaronson, Dan Boneh, Justin Drake, Sreeram Kannan, Yehuda Lindell, and Dahlia Malkhi, about the quantum threat to cryptocurrencies and how best to respond to it. Section 1.6, entitled “Are There Insurmountable Challenges Ahead?”, discusses my position and presents a view similar to the one in Scott’s interview. Overall, the section represents a reasonable point of view, and some of my technical responses to Scott’s interview apply here as well.

The position paper also expresses a cautious view of the quantum cryptographic threat, arguing that it is not an imminent threat but should not be ignored either. The authors have high confidence that a large-scale, fault-tolerant quantum computer will eventually be built and that blockchains and the wider cryptographic ecosystem must prepare for it. Scott himself goes beyond (or against) these joint conclusions and expresses his personal warning that quantum computers may start breaking cryptography a few years from now!

I addressed this question in a June 2025 lecture at the Ethereum Foundation. (See this post for the slides and video recordings.) Here are my recommendations: 

[Slide with the recommendations from the lecture.]

In an earlier 2022 post, I wrote that the past decade of quantum computing has been characterized by notable progress, tempered expectations, increased investment, considerable enthusiasm, and a measure of hype. I also noted that the overall picture remained unclear and expressed the hope that it might become clearer over the next 5–10 years. Developments in recent years, including findings that cast doubt on the reliability of some prominent scientific claims, have not clarified the picture but rather deepened the uncertainty. These developments highlight the need for more rigorous scrutiny of experimental results and for greater investment of effort and resources in independent verification.

 

Posted in Controversies and debates, Quantum | 12 Comments

The Fully Depolarizing Noise Conjecture for Physical Cat States is Twenty Years Old!

Amidst the war, I am holding onto hope for a future of peace and tolerance. At the end of the post I include a few pictures from the safe room (closet). Writing this also made me reflect on related posts from 2017 and 2024.

 

The Fully Depolarizing Noise Conjecture (2006)

Conjecture (Fully Depolarizing Noise for Bell States).
For every physical implementation of a quantum computer, whenever a Bell state on a pair of physical qubits is created, there exists a small probability t>0 that both qubits are replaced by the maximally mixed state.

Equivalently: the preparation of a Bell state (i.e., a two-qubit cat state) on two physical qubits necessarily carries a non-negligible probability that both qubits undergo fully depolarizing noise.

Twenty years ago I proposed that this phenomenon cannot be avoided by any method of preparing a Bell state on a pair of physical qubits. In particular, the conjecture applies to any pair of transmons in a Bell state in superconducting architectures. As far as I know, the conjecture remains open.

What is a Bell state?

A Bell state (also called a two-qubit cat state) is a maximally entangled state of two qubits. A canonical example is

\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)

The other Bell states are obtained from this one by local unitary transformations (for example, by applying Pauli operations to one of the qubits).

Bell states represent the simplest and most fundamental form of quantum entanglement. They are the basic resource behind quantum teleportation, entanglement swapping, and many constructions in quantum error correction.
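As a small self-contained illustration (our own numpy sketch, not from the post), the following snippet builds the Bell state, checks that each qubit alone is maximally mixed, and applies the conjectured noise component in which both qubits are jointly replaced by the maximally mixed state with probability t:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2)
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)  # density matrix of the pure Bell state

# Reduced state of qubit 1: the partial trace over qubit 2 gives I/2,
# the signature of maximal entanglement.
rho_q1 = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(np.allclose(rho_q1, np.eye(2) / 2))  # True

# The conjectured noise component: with probability t, BOTH qubits are
# jointly replaced by the maximally mixed two-qubit state I/4.
t = 0.01  # illustrative value
rho_noisy = (1 - t) * rho + t * np.eye(4) / 4
fidelity = bell @ rho_noisy @ bell  # = (1 - t) + t/4
print(round(fidelity, 6))  # 0.9925
```

The fidelity with the ideal Bell state drops linearly in t, which is the quantity an experimental test of the conjecture would try to isolate.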

Remarks.

  1. In its original formulation, the conjecture was stated more broadly: it applied to every entangled pair of physical qubits, not necessarily maximally entangled ones. Moreover, the weight of the joint fully depolarizing component was conjectured to grow at least linearly with the amount of entanglement between the qubits. The discussion below implicitly relies on this more general formulation.

  2. (Added March 17.) Bell states possess a special symmetry: two-qubit depolarization acting jointly on the pair is observationally indistinguishable from single-qubit depolarization acting on either qubit.
    However, the conjecture concerns the underlying noise channel, not merely what can be inferred from Bell-state tomography. 
    This observation nevertheless highlights the importance of formulating and experimentally testing the conjecture for general entangled states, and not only for Bell pairs. 

Direct versus indirect creation of entanglement

For directly gated qubits, this behavior is already built into standard stochastic noise models. The novelty of the conjecture is the claim that this remains true, with a similar error level, even when the cat state is created indirectly. Quantitatively, I conjecture that the value of t in the conjecture is in the ballpark of the two-qubit gate fully depolarizing error rate.

For example, one may create a cat state on qubits 1 and 3 by:

  • applying a CNOT on (1,2),
  • then a CNOT on (2,3),
  • and measuring qubit 2 (or uncomputing it).

In standard gate-level noise models, if each two-qubit gate independently depolarizes its participating pair with probability t, then:

  • qubit 1 is replaced with probability t,
  • qubit 3 is replaced with probability t,
  • both are replaced only with probability t^2.

Thus, in such models, the probability that both entangled qubits are hit drops from order t (direct gate) to order t^2 (indirect construction).
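A quick Monte Carlo check of this computation, as a sketch of the standard model (the parameter value is illustrative, not taken from the post):

```python
import random

random.seed(0)
t = 0.05           # per-gate two-qubit depolarization probability (illustrative)
trials = 200_000

both = 0
for _ in range(trials):
    hit12 = random.random() < t  # CNOT on (1,2) depolarizes qubits 1 and 2
    hit23 = random.random() < t  # CNOT on (2,3) depolarizes qubits 2 and 3
    # In this standard independent-gate model, qubit 1 is hit only by the
    # first gate and qubit 3 only by the second, so both are hit together
    # with probability t^2.
    if hit12 and hit23:
        both += 1

print(both / trials)  # close to t**2 = 0.0025
```

This is purely the baseline independent-gate-noise model against which the conjecture is stated.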

My conjecture asserts that nature does not permit such quadratic suppression. Even when entanglement is generated indirectly—through mediating qubits, measurement, feedforward, or clever circuit identities—the resulting Bell state on the two physical qubits still carries an intrinsic order-t probability that both qubits are replaced by the maximally mixed state.

Formulating the conjecture for state-of-the-art superconducting devices makes it more concrete. For superconducting quantum computers, the best two-qubit gate error rates are around 0.005, and we can assume that a large fraction of this noise is a depolarizing channel hitting both qubits. If the conjecture is correct for indirectly entangled qubits, we should be able to identify fully depolarizing errors for pairs of entangled qubits at a similar rate, rather than a rate two orders of magnitude smaller, as suggested by the computation above.

Physical intuition

The conjecture expresses, in a mathematically sharp way, a common physical intuition:

Entanglement between two physical systems requires a genuine physical interaction, and such interaction inevitably exposes both systems to correlated noise.

For example, there is no indirect method of performing an entangling gate between two transmons (say) that is much more reliable than directly interacting them. Standard gate-level noise modeling does not enforce this. Indeed, in simple circuit models (as we computed above), indirect constructions reduce fully depolarizing errors from order t to order t^2. Thus the conjecture postulates an additional, more subtle “nasty” property of physical noise—one that goes beyond standard local stochastic models. (I was myself recently confused about whether this stronger behavior follows from standard simulations; it does not.)

Localizable entanglement

Given a quantum state, the localizable entanglement (Verstraete–Popp–Cirac, 2004) of a pair of qubits is defined as the maximum amount of entanglement that can be obtained between them when single-qubit measurements are performed on all remaining qubits and the outcomes are conditioned upon.

The conjecture extends to such localizable entangled pairs: whenever a pair of qubits exhibits non-zero localizable entanglement, there exists a small probability t>0 (depending linearly on the value of the localizable entanglement) that both qubits are replaced by the maximally mixed (maximum-entropy) state.
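For concreteness, a numpy sketch (our own illustration, not from the post) of localizable entanglement in the 3-qubit GHZ state: an X-basis measurement of the middle qubit leaves the outer pair in a Bell state, so the outer pair has maximal localizable entanglement.

```python
import numpy as np

# 3-qubit GHZ state (|000> + |111>)/sqrt(2), qubit order (1, 2, 3)
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

# Measure qubit 2 in the X basis and keep the |+> outcome:
# projector I (x) |+><+| (x) I
plus = np.array([1.0, 1.0]) / np.sqrt(2)
P = np.kron(np.eye(2), np.kron(np.outer(plus, plus), np.eye(2)))
post = P @ ghz
post /= np.linalg.norm(post)

# Post-measurement state of qubits (1,3): trace out the measured qubit
rho = np.outer(post, post).reshape(2, 2, 2, 2, 2, 2)
rho_13 = np.einsum('abcdbf->acdf', rho)  # shape (2,2,2,2), qubits (1,3)

# Qubits 1 and 3 are now maximally entangled: qubit 1 alone is I/2
rho_1 = rho_13.trace(axis1=1, axis2=3)
print(np.allclose(rho_1, np.eye(2) / 2))  # True
```

The conjecture, in its extended form, would attach a joint fully depolarizing noise component of rate proportional to this localizable entanglement to the pair (1, 3).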

Implications for larger entangled states

For more complicated entangled states—surface-code states, GHZ states, random-circuit sampling states, and cluster states—the extended conjecture applies to every pair of physical qubits that can exhibit localizable entanglement. In several canonical families (such as GHZ states, cluster states, and surface-code states), this includes essentially every pair of qubits.

If true, this would have severe consequences for quantum error correction: correlated depolarizing noise on pairs of qubits is far more damaging than the quasi-independent noise assumed in threshold theorems. The reason is that a noise channel in which the events “qubit i is depolarized” and “qubit j is depolarized” are positively correlated for all pairs (i,j) necessarily leads to large-scale error synchronization.
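To illustrate the synchronization point, here is a toy Monte Carlo (our own sketch; the model and the numbers are illustrative, not from the post). Pairwise positively correlated depolarization can be modeled as a mixture containing a rare global event; at the same marginal error rate, this produces wipe-outs of half the qubits at a constant rate, something independent noise essentially never does.

```python
import random

random.seed(1)
n, trials = 50, 100_000
p = 0.01   # marginal depolarization probability per qubit (illustrative)
q = 0.005  # probability of a global "synchronizing" event (illustrative)

# Mixture model: with probability q every qubit is depolarized at once;
# otherwise qubits fail independently at a reduced rate p_ind chosen so
# that the marginal stays p:  p = q + (1 - q) * p_ind
p_ind = (p - q) / (1 - q)

big_events = 0
for _ in range(trials):
    if random.random() < q:
        errors = n  # global event: all qubits depolarized together
    else:
        errors = sum(random.random() < p_ind for _ in range(n))
    if errors >= n // 2:
        big_events += 1

print(big_events / trials)  # ~ q: synchronized wipe-outs at a constant rate
```

With independent noise at marginal rate p = 0.01, seeing 25 of 50 qubits fail at once is astronomically unlikely; in the mixture model it happens at rate roughly q.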

The state of the conjecture

As far as I know, the conjecture remains open.

Current NISQ-scale devices could in principle test it experimentally—even at noise levels above the fault-tolerance threshold. The challenge is not the scale, but the identification of the specific correlated fully-depolarizing component in the noise.

Recent demonstrations of distance-3, distance-5, and even distance-7 surface codes appear, at least superficially, to be in tension with the conjecture. Whether this tension is genuine or only apparent deserves careful examination. It will also be interesting to examine the situation for experimentally realized “large” GHZ states.

I look forward to the conjecture being carefully tested—and, of course, possibly refuted.

Motivation

The original motivation for the conjecture was to “reverse engineer” natural structural conditions on noise that would cause quantum fault tolerance to fail.

For this reason, I did not view the conjecture itself as a reason for people to revise their a priori beliefs about quantum computers. Rather, I regarded it as a concrete and testable benchmark for quantum devices — one that is meaningful both for small systems with only a handful of qubits and for larger systems. The conjecture is relevant even to systems operating at noise levels above the threshold required for fault tolerance.

Sources

In March 2006 I raised an early version of the conjecture on Dave Bacon’s blog and discussed it with Dave Bacon, John Sidles, and others. The conjecture later appeared as Postulate 1 in my 2006 paper How quantum computers can fail and in several subsequent works. The related “noise synchronization” consequence for highly entangled states appeared as Postulate 2 and it connects back to my 2005 paper.

These postulates became Conjecture 3 and Conjecture 4 in my 2012 debate with Aram Harrow, where they played a central role. Conjectures 3 and 4 from the debate, together with a further consequence for the error rate (in terms of qubit errors), are Predictions 1, 2, and 3 in my 2016 paper “The Quantum Computer Puzzle”.

Remark: In my papers I struggled to formulate the conjecture precisely and to identify the most appropriate notion of entanglement. Formulating the conjecture for localized entanglement (which I referred to as emergent entanglement) was proposed in my 2009 paper Quantum computers: Noise propagation and adversarial noise models.

I also considered a weaker notion where the maximum amount of entanglement over single-qubit measurements is replaced by the average entanglement obtained from such measurements.

In some early papers I also formulated related conjectures for correlated classical noisy systems, asserting that correlation in the “signal” implies correlation in the noise. In this generality, classical computation provides simple counterexamples. Nevertheless, suitably adjusted versions of the conjecture appear to apply to several natural physical systems.

Some counter arguments (and responses)

1) “This may be true for physical qubits, but it is false for logical qubits.”

Yes — the conjecture refers to physical qubits.

If the conjecture holds, it lowers the prospects for achieving reliable logical qubits. In fact, good-quality logical qubits would violate the Fully Depolarizing Noise Conjecture at the physical level. Thus the conjecture asserts that sufficiently good logical qubits are out of reach because the necessary physical behavior is unavailable.

2) “The conjecture does not describe a legitimate quantum channel (namely a channel supported by quantum mechanics).”

The conjecture does not specify a complete noise model. It imposes constraints on whatever the correct physical noise model is.

Even if universally true, the physical mechanisms leading to the conjectured behavior may differ between implementations. The conjecture is structural, not mechanistic.

3) “The conjecture violates linearity of QM. It is possible that it will apply to one initial state of your quantum computer but not to another one.”

This is incorrect (though interesting). As I wrote above, the conjecture proposes constraints on the noise model rather than a complete description. If, for the same circuit, the conjecture applies for one initial condition but not for another, this does not violate linearity. Instead, it implies that if preparing these initial conditions makes no difference for the noise, then correlated depolarizing errors will appear in both cases.

4) “The noise (or Nature) cannot ‘know’ if the state is entangled or not. Entanglement cannot cause correlations for the noise.”

The conjecture does not assume that noise “detects” entanglement or that entanglement directly “causes” correlation. It asserts that the physical processes required to generate entanglement inevitably produce correlated noise.

5) “The conjectured noise resembles nonphysical random-unitary models.”

Even if certain effective noise models implied by the conjecture appear unphysical, that does not refute the conjecture. It may instead indicate that certain large-scale entangled states are themselves physically unrealizable.

If creating a distance-15 surface code necessarily produces nonphysical types of noise, that would suggest that building such a code is itself nonphysical. The figure in my 2009 paper When Noise Accumulates was meant to illustrate precisely this point.

Image

6) “The threshold theorem extends to general long-range noise.”

Yes — but these extensions still violate the conjecture. Mathematically speaking, many small long-range interaction terms are not the same as imposing substantial correlated depolarization.

7) “Entanglement is basis dependent.”

The conjecture refers to correlations in a fixed computational basis. There is no ambiguity here.

8) “Empirical independence in random circuit sampling shows that there are no unexpected nasty aspects of quantum noise.”

No.

Fidelity measures (including XEB) primarily estimate the probability that no error occurred. They are not sensitive to correlated structure among the error events that do occur.

The conjecture concerns correlations between qubits conditioned on noise events, not the global fidelity of outputs.

9) “If the error rate is p, the conjecture implies \Omega(n^2 p) errors per round. (We expect \Omega(np) errors per round.)”

This observation captures an important feature of the conjecture.

In trace-distance terms, the total noise per round may scale linearly.
But in terms of qubit-error counts, error synchronization can produce quadratic scaling \Omega(n^2 p) in the number of correlated qubit hits.

This quadratic synchronization is precisely the phenomenon the conjecture predicts.
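As a back-of-the-envelope illustration (my own toy calculation, with illustrative parameter values, not part of the original argument), one can compare the expected number of pairwise error coincidences under independent errors versus under a pairwise law of the conjectured type, matching the per-qubit marginal rate between the two models:

```python
from math import comb

def pair_hits_independent(n: int, r: float) -> float:
    # Independent errors at per-qubit rate r: Pr(qubits i and j both hit) = r**2.
    return comb(n, 2) * r * r

def pair_hits_synchronized(n: int, t: float) -> float:
    # Pairwise law of the conjectured type: Pr(qubits i and j both hit) = t/3.
    return comb(n, 2) * t / 3

# Under the pairwise law each qubit is hit with marginal probability 2t/3,
# so take r = 2t/3 to compare models with the same per-qubit error rate.
n, t = 100, 0.003
r = 2 * t / 3
ratio = pair_hits_synchronized(n, t) / pair_hits_independent(n, r)
# ratio = (t/3) / (4*t*t/9) = 3/(4*t): at the same marginal rate,
# synchronization inflates coincident qubit hits by a factor of order 1/t.
```

For t = 0.003 the ratio is 250; the smaller the error rate, the more dramatic the conjectured excess of coincidences over the independent-noise baseline.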

10) “The conjecture is too vague; it does not explicitly describe the noise channel. It also does not describe the physical source of the noise and its exact modeling.”

This is partially true. The conjecture is structural and not mechanistic (see further explanation below).

Testing the conjecture experimentally would require identifying in experimental data specific correlated fully-depolarizing components. Supporting it theoretically would require fine-grained modeling of concrete physical systems.

11) “As long as t > 0 is unspecified, the conjecture might always stay open.”

I conjecture that t is in the ballpark of the fully depolarizing error rate of two-qubit gates.

In contrast (to the best of my memory), for fault-tolerant quantum computers with n physical qubits, for most pairs of qubits i,j, the correlation between the events “qubit i is hit by a depolarizing error” and “qubit j is hit by a depolarizing error” tends to zero as n tends to infinity.

12) “The correlation conjecture (and my earlier line of research from 2005) has no direct bearing on topological quantum computing.”

Right.

13) “The conjecture represents the ‘physical-noise-is-conspiring’ position. However, Nature is not malicious.”

We will see about that 🙂 .

Remarks:

(to items 3, 4) The possibility of a systematic relation between the noiseless state and the noise was raised and discussed in my 2005 paper, and over the years it has led to interesting heated discussions.

(to item 5) Noise models that resemble the behavior of random unitary operators were offered by early skeptical views. They do not violate the postulates of quantum mechanics but are considered unphysical: they violate how quantum physics is believed to be mapped into quantum mechanics.

(to item 6) Preskill’s 2012 paper (partially triggered by our 2011 Caltech discussion) presented general conditions for the threshold theorem to hold and cites earlier papers in this direction. Robert Alicki opined that the conditions proposed by Preskill are violated for open quantum systems, and I explained in this comment a specific feature of Preskill’s very general noise models that I find physically questionable.

Gated qubits and “purifying” gate errors

The assertion of the conjecture for gated qubits is a rather vanilla part of the standard model of noise for noisy quantum circuits, and it is also clearly manifested by experimental data. The negation of the conjecture would mean the ability to create entangled pairs of transmons (or other physical qubits) without any fully depolarizing errors. In effect, this would amount to “purifying” the two-qubit gate used to generate entanglement: starting with two-qubit gates that include fully depolarizing noise at rate t, one could nevertheless produce physical entangled pairs with no fully depolarizing component on the pair itself. The conjecture asserts that such purification is not physically possible.

The conjecture is structural and not mechanistic

Some of the objections listed above (especially 2, 3, 4, and 10) treat the conjecture as if it were proposing a specific mechanistic claim of the form: “Here is the physical source of the noise — e.g., microscopic Hamiltonian couplings, thermal photons, leakage, cross-talk — and here is the dynamical derivation showing why fully depolarizing correlations appear.” Such a claim would specify the environment, the interaction model, the time evolution, and the exact channel arising from tracing out the bath. My conjecture is a structural claim of the form:

Whenever two physical qubits can be prepared (or projected) into a Bell state, there is an intrinsic order-t fully depolarizing component acting jointly on them.

This is a statement about the form of the effective channel, not about the physical process generating it. Of course, mechanistic explanations (that may differ for different implementations) that support the conjecture could be valuable.

A Simple Probabilistic Model: Tail Bounds from Pairwise Marginals

To illustrate how strong pairwise correlations can force even large-scale error synchronization, consider the following simple probabilistic model. There is a probability distribution W on \{0,1\}^n. We generate a random bitstring w according to the distribution W. If w_k=1 we fully depolarize qubit k.

Write the random bitstring as X=(x_1,\dots,x_n)\in\{0,1\}^n, and let S=\sum_{i=1}^n x_i denote the number of depolarized qubits. We assume that the joint distribution of every pair (x_i,x_j) (for i\neq j) is fixed. The goal is to minimize the upper tail probability \Pr(S\ge \lceil sn\rceil).


Theorem (symmetric “one-parameter” pair law): Assume that for every i\neq j,

\displaystyle \Pr(x_i=0,x_j=0)=1-t,\qquad \Pr(x_i=0,x_j=1)=\Pr(x_i=1,x_j=0)=\Pr(x_i=1,x_j=1)=\frac{t}{3},

where t\in[0,1]. Let S=\sum_{i=1}^n x_i and K=\lceil sn\rceil. Then the minimum possible value of \Pr(S\ge K) over all such distributions equals

\displaystyle \min \Pr(S\ge K)= \begin{cases} \frac{4t}{3}\cdot\frac{n}{n+1}, & \text{if } K \le \frac{n+1}{2},\\[6pt] 0, & \text{if } K > \frac{n+1}{2}. \end{cases}

(In particular, for large n this is asymptotically \frac{4t}{3} for s<\tfrac12 and 0 for s>\tfrac12.)

A proof and discussion of this implication will be given separately.
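Pending the separate write-up, here is a small numerical sanity check (my own sketch, assuming n odd so that m=(n+1)/2 is an integer): the mixture that places weight q=\frac{4t}{3}\cdot\frac{n}{n+1} on a uniformly random string of weight m, and the remaining weight on the all-zeros string, has exactly the pairwise marginals of the theorem and attains the stated minimum tail.

```python
from fractions import Fraction

n = 9                     # number of bits; odd, so m = (n + 1) / 2 is an integer
t = Fraction(3, 100)      # pair-law parameter from the theorem
m = (n + 1) // 2          # weight of the "synchronized" error pattern
q = Fraction(4, 3) * t * Fraction(n, n + 1)   # mixing weight on weight-m strings

# Mixture: with probability q draw a uniform string of weight m,
# otherwise output the all-zeros string.  Pairwise marginals:
p11 = q * Fraction(m * (m - 1), n * (n - 1))  # Pr(x_i = 1, x_j = 1)
p10 = q * Fraction(m * (n - m), n * (n - 1))  # Pr(x_i = 1, x_j = 0)
p00 = 1 - p11 - 2 * p10                       # Pr(x_i = 0, x_j = 0)

assert p11 == t / 3 and p10 == t / 3 and p00 == 1 - t

# S equals m with probability q and 0 otherwise, so Pr(S >= K) = q
# for K <= m = (n + 1)/2 and 0 for larger K -- matching the theorem.
def tail(K: int) -> Fraction:
    return q if K <= m else Fraction(0)

assert tail(m) == Fraction(4, 3) * t * Fraction(n, n + 1)
assert tail(m + 1) == 0
```

Exact rational arithmetic (via `fractions`) makes the marginal identities checkable with equality rather than floating-point tolerance.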

Time-parametrization and smooth Lindblad evolutions

My conjecture asserts that for noisy quantum states and evolutions — when the noise level is low — there are systematic relations between the noise and the corresponding “ideal” state. (Such systematic relations were already explored in my 2005 paper.) For example, the noise in a quantum computation may reflect symmetries and statistical features of the underlying signal.

Over the years, I made several attempts to place these ideas into a broader mathematical framework, extending beyond the specific (and hypothetical) setting of quantum computers. One direction I proposed was the study of certain subclasses of Lindblad evolutions, which I referred to as smoothed Lindblad evolutions. Another direction involved introducing an intrinsic time-parametrization for noisy quantum evolutions. These ideas appear as Predictions 6 and 7 in my 2016 Notices of the AMS paper.

Both directions aim at understanding noise not as arbitrary perturbation, but as a structured phenomenon constrained by the evolving quantum state.

My paper and discussion with Greg Kuperberg.

One offshoot of these conjectures — particularly the notion of smoothed Lindblad evolutions — and long email discussions with Greg Kuperberg led to a joint paper: Contagious error sources would need time travel to prevent quantum computation. Following ideas of Emanuel Knill, we proved that fault tolerance is possible even under very general forms of “disease-like” spreading noise.

Greg viewed this paper as a successful response to my skepticism. I did not quite see it that way. I saw our paper as clarifying an important point: if time-smoothing in my proposed class of Lindblad evolutions applies only forward in time, then fault tolerance can still succeed. To obstruct fault tolerance, the smoothing would need to extend to both past and future — effectively requiring a kind of “time-travel” correlation structure.

Recently, Greg jokingly proposed a joint mathematical collaboration (not necessarily on quantum topics) involving the two of us together with Gal Kronenberg and Guy Kindler. I think this is a wonderful idea.

My argument against QC (2013-2020), and other related directions.

Between 2013 and 2020, I pursued a different skeptical direction regarding quantum computation. Together with Guy Kindler (initially in the context of Boson Sampling), I developed an argument based on computational complexity and the notion of noise sensitivity. This line of work suggests that NISQ devices cannot achieve “quantum supremacy.”

Unlike the Fully Depolarizing Noise Conjecture — which posits a structurally “nasty” type of correlated noise beyond standard modeling — this later argument relies entirely on standard noise assumptions. It explains why the extremely low error rates required for quantum supremacy and scalable fault tolerance may be beyond reach.

Naturally, the quantum-supremacy claims of the past six years directly challenge this position. The central issue is careful evaluation of experimental details. As far as I know, those claims have no direct bearing on the Fully Depolarizing Noise Conjecture.

Both skeptical directions generate near-term experimental predictions. Both are discussed in my 2016 paper “The Quantum Computer Puzzle”. While experimental validation is the primary testing ground, another possible avenue is to derive broader physical consequences beyond the specific framework of quantum computing.

Let me also note that the question of whether “NISQ computers” could deliver “quantum supremacy” arose in my debate with Aram Harrow — and in other discussions — before the terms “NISQ” and “quantum supremacy” were coined.

Statistical testing

Since 2019, I have also been working (with Yosi Rinott, Tomer Shoham, and Carsten Voelkmann) on developing statistical tools for analyzing samples produced by NISQ experiments. Engaging directly with experimental data and statistical methodology may prove useful for testing the Fully Depolarizing Noise Conjecture as well.

One possible approach is to begin with a standard noise model, augment it with the hypothetical two-qubit depolarizing component (DEPOLARIZE2) predicted by the conjecture at some rate p, and then determine the value of p that best matches empirical data. Here DEPOLARIZE2 refers to a two-qubit depolarizing channel acting jointly on the pair.
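As a minimal illustrative sketch (my own toy model, not the actual statistical procedure, with made-up numbers): if the only augmentation to the baseline model were a single DEPOLARIZE2 channel at rate p acting on a Bell pair, the fidelity would degrade linearly, F(p) = (1-p)·F_baseline + p/4, since the maximally mixed two-qubit state has overlap 1/4 with the Bell state; p can then be read off from the observed fidelity.

```python
def fit_depolarize2_rate(f_baseline: float, f_observed: float) -> float:
    """Invert F(p) = (1 - p) * f_baseline + p / 4 for the extra rate p."""
    return (f_baseline - f_observed) / (f_baseline - 0.25)

# Illustrative numbers: the baseline noise model predicts Bell-pair
# fidelity 0.99, while the experiment shows 0.975; the residual gap is
# attributed to a hypothetical DEPOLARIZE2 component at rate p_hat.
p_hat = fit_depolarize2_rate(0.99, 0.975)
```

Realistic fitting would of course use full circuit-level simulation and many observables rather than a single fidelity number, but the one-parameter inversion conveys the basic idea.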

I recently had useful discussions with Craig Gidney about this type of simulation. Craig is optimistic that the skeptical position will be decisively refuted in the coming years. Statistical fitting of experimental samples to classes of W-noise models (discussed earlier) may also provide empirical tests of the conjecture.

Conclusion

The Fully Depolarizing Noise Conjecture was proposed twenty years ago as a structural stress test for quantum computing — a condition that scalable quantum devices must overcome. It does not attempt to describe a specific microscopic mechanism. Rather, it imposes a constraint on the effective noise channel: whenever physical qubits can generate entanglement — or localizable entanglement — correlated fully depolarizing noise must appear at linear order.

Whether this structural constraint reflects a fundamental limitation of quantum devices, or whether it will ultimately be refuted by experiment, remains an open question. The answer lies mainly in precise experimental scrutiny together with careful theoretical modeling and analysis.

 

Image Image

Image

A few pictures from the safe room (closet).

Late remark: There is a lovely interview with Scott Aaronson by Yuval Boger of QuEra. Yuval asked: “I’ve heard you refer many times in your talks to an argument with Gil Kalai about whether large-scale quantum computing is even possible. Do you think he still has a path to being vindicated, or is it over?” Scott gave a detailed and nice answer presenting his view of my position. (See also this SO post.) The discussion nicely illustrates how this debate continues to evolve twenty years after the Fully Depolarizing Noise Conjecture was first proposed.

 

Posted in Quantum | 5 Comments

Cosmin Pohoata: The Cayley-Bacharach theorem and its applications

Cosmin Pohoata’s new and beautiful blog post presents several combinatorial applications of the Cayley–Bacharach theorem. Let me also mention his earlier post, in which he gave a new proof of Jamison’s direction tree theorem: every finite noncollinear point set in \mathbb{R}^2 admits a direction tree—that is, a spanning tree whose edges have pairwise distinct directions.

In January, after a long break, Cosmin wrote another post presenting an entropy-based proof of an interesting product–sum theorem over finite fields.
Image
Some direction trees from Jamison’s original paper

Posted in Algebra, Combinatorics | 2 Comments

Updates and Plans V: From Boise to Tel Aviv, Ceasefire, My 70th Birthday, Nostalgia, Problems, Outrageous Conjectures, Quantum, and AI

This is the fifth post of this type (I (2008), II (2011), III (2015), IV (2024)).

Between Boise and Tel Aviv

During the summer we spent two months in the lovely city of Boise, Idaho. We stayed with my son Hagai and his husband Felix, their one-and-a-half-year-old son Yonatan, and—one week after our arrival—their second son, Rafael, was born. I was visiting Boise State University and was hosted by Zach Teitler, whom I first met many years ago at Texas A&M.

Boise is a beautiful city with wonderful parks, and Mazi and I also devoted a week to visiting Yellowstone for the first time.

Image

On the flight back with Hagai, Rafael, Mazi, and Yonatan

Ceasefire in Gaza

On September 29, 2025, US president Donald Trump put forward a 21-point plan for a ceasefire in Gaza, as part of a broader initiative toward peace in the Middle East. The plan was endorsed by many world leaders, including Arab and Muslim leaders, as well as by the Israeli government headed by Benjamin Netanyahu. On October 9, an agreement was reached between Israel and Hamas on a ceasefire, a partial withdrawal of Israeli forces, and the release of kidnapped Israelis. On October 13, all living kidnapped Israelis were released. By January 27, 2026, all bodies of Israelis were returned.

The end of this terrible war is certainly a source of relief and hope. Still, we are living in dangerous and tragic times, with much uncertainty.

My 70th Birthday

We landed back in Tel Aviv on October 1, the eve of Yom Kippur. The following day—somewhat to my surprise—was my 70th birthday. Two weeks later, on Simchat Torah (October 14), we gathered to celebrate the holiday, the birthday, the new family member, and the end of the war. Being 70 years old feels sort of strange. 

Image

Nostalgia Corner and Congratulations to Benjy Weiss

Image

“secretly jubilant” 

I recently found, in my sister’s home (where we both lived as children), a Jerusalem Post article from 1971 about the Israeli Mathematical Olympiad. In that competition I received a consolation prize, while my friend Ron Donagi received the first prize. Here is a quote from the article: “‘The prize is much more than I expected,’ stated an apparently indifferent yet secretly jubilant Gil.”

The reporter, Mark Daniel Sacks, also expressed his wish for similar encouragement for “those of us who are interested in literature, poetry, philosophy, and art.” I fully agree!

A few months earlier, in the fall of 1970, I began attending Benjy Weiss’s year-long mathematics course for high-school students, together with 20–25 other students, including Yehuda Agnon, Michael Ben-Or, Rami Grossberg (see comment below), Ehud Lehrer, Uzi Segal, Yonatan Stern, and Mike Werman. The course was an eye-opener for all of us.

It has just been announced that our teacher, Benjy Weiss, has won the 2026 Israel Prize in Mathematics. Heartfelt congratulations, Benjy!

Image

(Added February 17: Benjy and I today with one problem set from 1970)

Problems, Problems 

Over the years I have devoted quite a few posts—here, on other blogs, and on MathOverflow—to open problems. In 2013, at the Erdős Centennial conference, I gave a lecture on old and new problems, mainly in combinatorics and geometry (here are the slides), where I presented twenty problems that are also listed in this post. Since then, there has been substantial progress, and in some cases full solutions, for roughly 30% of them.  

I gradually plan, somewhat in Erdős’ tradition, to upgrade my problem posts and lectures into papers.

So far, in 2015 I wrote a paper around Borsuk’s problem. (Some of the problems appeared in these posts.) In 2022, Imre Barany and I wrote a survey article on Helly-type problems, which was very well received. I am currently writing a paper about the diameter problem for graphs of polytopes. We devoted many posts—and Polymath 3—to this problem, and I plan to submit the paper to the new and splendid journal JOMP: the Journal of Open Math Problems.

Geometric Combinatorics

There are many open problems that I like—and quite a few that I myself posed—concerning the combinatorial theory of convex polytopes, face numbers of polytopes, and related cellular objects. This post from 2008 lists five elementary problems; since then, one of them has been solved. One outstanding problem in the field that I like is whether triangulated spheres are determined by their dual graphs. This is known for simplicial polytopes (see this post from 2009) and was recently proved for all shellable simplicial spheres by Yirong Yang in her paper Reconstructing a Shellable Sphere from its Facet-Ridge Graph. (Added later: here are slides for my Kyoto 2012 talk on: Open problems for convex polytopes I’d love to see solved.)

Let me mention two problems from other areas of combinatorial geometry.

Two triangles are called almost disjoint if they are either disjoint or their intersection consists of one common vertex. Let f(n) denote the maximum number of pairwise almost disjoint triangles that can be found on some vertex set of n points in 3-space. How large can f(n) be? It is easy to see that f(n) is at most quadratic in n, and the best lower bound, from 2002, by Károlyi and Solymosi is f(n)=\Omega (n^{3/2}). There is related work from 2017 by Solymosi and Wong.

In 1995, Nati Linial and I conjectured that the kissing number for lattice sphere packings in \mathbb R^n is subexponential in n. The highest known kissing number behaves like n^{\log n}. Our problem was related to the question of finding upper bounds for the density of sphere packings in high dimension. Recent celebrated work of Klartag shows an intriguing connection between kissing numbers and lower bounds on sphere-packing density.

Analysis of Boolean Functions and Probabilistic Combinatorics

In a draft paper from 2000 (which I mostly distributed privately), I listed 18 interesting phenomena and 23 problems around these phenomena related to Boolean functions and their Fourier expansion. Since then there have been many developments in the analysis of Boolean functions. Here is a comprehensive list of open problems from 2014. One problem in the list was recently solved by GPT5. I myself posed quite a few problems in this area, but let me mention today the still open Aaronson–Ambainis conjecture from 2008: for every function f:\{-1,1\}^n\to [-1,1] of degree at most k, there exists a variable with influence at least (V(f)/k)^C, for some constant C, where V(f) stands for the variance of f.

In probabilistic combinatorics, the “Kahn–Kalai conjecture” from our 2006 paper has famously been solved by Park and Pham, and a second conjecture about graphs was settled, up to a \log^2 n factor, by Dubroff, Kahn, and Park.

Jeff Kahn and I regarded the conjecture as outrageous—and likely false—but in that paper we formulated several specific conjectures (in the area of discrete isoperimetric inequalities) as part of a broader program for proving it. In spite of some substantial progress, these conjectures remain largely open, although a few have been refuted. One of those conjectures is presented in this MO post. In principle, the Kruskal–Katona theorem should suffice to settle this problem, but we still cannot solve it.

Extremal Combinatorics

One question I asked—independently also posed by Karen Meagher—concerned the independence numbers of intersection graphs of triangulations. This conjecture is still open and it admits a lovely generalization for a large class of polytopes. Recently, Anton Molnar, Cosmin Pohoata, Michael Zheng, and Daniel G. Zhu raised the question of finding the chromatic number of the intersection graphs of triangulations—and solved it! They showed that the Kneser graph of triangulations of a convex n-gon has chromatic number n-2. 

Computational Complexity and Number Theory

Around 2010, I formulated several conjectures relating computational complexity and number theory, which led to some very nice developments. Together with Mrinal Kumar and Ben Lee Volk, I plan to write a paper with further problems connecting algebraic circuit complexity and number theory.

Two Outrageous Conjectures

Here are two very outrageous conjectures that may well admit simple refutations. (Comments are welcome; the right thing to do would be to devote a separate post to each of them; stay tuned.)

The first outrageous conjecture is presented in this slide from a 2024 lecture. 

Image

See also this MO question and this one.

The second vague and outrageous conjecture (already mentioned earlier in this post) is about computational complexity, and more precisely about Papadimitriou’s computational hierarchy for mathematical proofs. It asserts that theorems guaranteeing the existence (for sure, not just with high probability) of combinatorial structures, whose proofs are based on the probabilistic method, are accompanied by an efficient algorithm (possibly randomized) for finding these structures. (In other words, the probabilistic method does not lead to a new Papadimitriou class beyond P.)

Quantum Information and Quantum Physics

It is likely that the proportion of posts dealing with quantum computing and quantum physics will increase. So far, they account for about 8% of all posts since I began blogging. My interest in this area has branched into several related directions.

The Argument Against Quantum Computing

The direction closest to my heart is the argument against quantum computing. I have invested considerable effort in explaining and discussing my theory for why quantum computers are inherently impossible—through papers, lectures, debates, and blog posts. I try not to oversell the case, and I think that ultimately, experiments are likely to provide the clearest way to decide the matter.

Correlated Errors 

A related but distinct issue concerns the modeling of correlated errors, which was central in my research between 2005 and 2012, and more generally the behavior (and modeling) of noisy quantum systems that do not exhibit quantum fault tolerance. Here too, experiments and simulations can provide significant insight, and my (admittedly bold) conjectures about error correlations could be tested directly.

Statistical Analysis of Experimental Data

Another topic is the statistical analysis of current experimental data. With my coauthors, we devoted substantial effort to analyzing Google’s 2019 experiment, and I believe more can be done to clarify and explain the findings of our papers. Our long-running project is closely related to developing statistical tools for analyzing quantum measurements and modeling noise. A recent paper on this topic by another group is: How much can we learn from quantum random circuit sampling? by Manole et al.

Quantum Puzzles

I also plan a series of posts devoted to quantum puzzles related to quantum information and computation. The first post concerned Majorana zero modes. Whether Majorana zero modes can in fact be created remains a major mystery in physics, and I personally suspect the answer may be negative. (As with “quantum supremacy,” their realization has been claimed by several research groups.) Planned follow-up posts will address quantum cryptography and the time–energy uncertainty principle.

Free Will

I plan to return to the fascinating connections between quantum physics, computation, and free will. I wrote a paper on this topic in 2021, and we discussed it in this blog post. Since then, I participated in two conferences in Nazareth, in 2022 and 2024, devoted to free will (here are the videotaped lectures—in Hebrew). Following these conferences and my paper, I have had many stimulating discussions with colleagues from a wide variety of disciplines.

Is Quantum Computational Advantage Manifested by Nature? Has it been Achieved by Experiments?

This question lies at the heart of the matter and connects to all the topics above. In a recent lecture, Yosi Avron mentioned an argument—possibly going back to Feynman—that quantum physics in Nature already exhibits “quantum supremacy”: computing the magnetic moments of the proton or neutron from first principles is extraordinarily difficult and yields estimates far from experimental values, yet protons and neutrons “compute” their magnetic moments effortlessly. In the same lecture, delivered at a celebratory meeting for the 100th anniversary of quantum mechanics at the Open University in Ra’anana, Yosi also argued that no country can afford to lag behind in quantum computation, drawing an analogy with nuclear capabilities.

Computers, AI and Mathematics

Like many others, I plan to experiment with modern AI tools in the hope of using them for meaningful mathematical research. I am cautiously optimistic—perhaps naïve. Let’s see how it goes.

Pictures

Image   Image

Image  Image

Image  Image

Image  Image

Image  Image

Image   Image

Image

Top row: Boise with Zach Teitler, Alexander Woo and Bruce Sagan’s classical book, and with local convex polytopes. Second row:  sightseeing near Boise. Third, fourth, and fifth rows: Yellowstone. Sixth row: Yonatan in Boise. Seventh row: Mazi and I with Ilan and Yoav in Tel Aviv. 

 

 

Posted in Combinatorics, Computer Science and Optimization, Open problems, personal, Updates | 9 Comments

ICECA 2026 (August 17-19, 2026), an interview with Christian Krattenthaler, and Condorcet revisited.

Image

Toufic Mansour, Christian Krattenthaler, and Condorcet.

International Conference on Enumerative Combinatorics and Applications (ICECA 2026)

The fifth International Conference on Enumerative Combinatorics and Applications will take place online on August 17-19, 2026.  Invited speakers include George E. Andrews, Sara Billey, Joel Friedman, Christian Krattenthaler, Richard Stanley, Guoce Xin, Sherry H.F. Yan, and Raphael Yuster. The conference homepage includes links to abstracts and videos from the first four conferences.

The conference series is organized by Toufic Mansour, who also founded the journal Enumerative Combinatorics and Applications (ECA) and initiated its interview series and open-problems page.

An interview with Christian Krattenthaler.

The most recent interview on ECA is with Christian Krattenthaler, whom I have known for many decades. The interview touches on many areas of enumerative combinatorics, with a special emphasis on determinants. Christian mentioned early connections between combinatorics and determinants in the works of George Andrews, Richard Stanley, Bernt Lindström, John Stembridge, and others.

Among his contributions he highlights his bijective proof (not using the involution principle) of Stanley’s hook-content formula; a joint paper with Maria Prohaska proving a remarkable determinantal formula conjectured by Aldo Conca and Jürgen Herzog; and some new formulas for \pi! (joint work with Gert Almkvist and Joakim Petersson). Mansour’s introduction highlights Christian’s 2005 paper Advanced determinant calculus that “has become a central reference in the field, shaping a generation of research.”

Beyond mathematics, Christian is also a trained concert pianist, maintaining a lifelong passion for music alongside his research career. In part of the interview he describes his views on music. In this context, one may also mention his essay Music AND Mathematics? Personal views on a difficult relationship. 

Rebecca Embar and Doron Zeilberger revisit Condorcet’s paradox

Among the papers I saw on ECA there was one by Rebecca Embar and Doron Zeilberger, Counting Condorcet. The paper determines the precise number of profiles of individual order relations on three alternatives (among the 6^n possible profiles) that lead to Condorcet’s paradox.

Condorcet has been mentioned several times on this blog. In a 2012 post, we discussed his prominent role among the philosophers who contributed to the 1789 Declaration of the Rights of Man and of the Citizen. In a 2009 post we highlighted his role in social choice theory.

Update: 10th International Conference on Lattice Path Combinatorics and Applications 2026 (LPC 2026)

The 10th International Conference on Lattice Path Combinatorics and Applications will take place at TU Wien in Vienna, Austria, July 20–24, 2026. The conference will cover interactions of lattice paths with various domains (probability theory, enumerative/algebraic/asymptotic combinatorics, q-analogues, computer science, combinatorial physics, …).

Posted in Combinatorics, Conferences, Updates | 3 Comments

Dominik Hangleiter’s View Posts on: Has Quantum Advantage Been Achieved?

Image

In a recent post on Quantum Frontiers—the first in a series of three—Dominik Hangleiter discusses the question of whether quantum advantage (also known as “quantum supremacy”) has been achieved. (Update, Jan. 26: here is Dominik’s second post; March 4: here is Dominik’s third post.)

Dominik describes polling audiences of experimental and theoretical physicists (with a few computer scientists) at recent talks, expecting overwhelming agreement that quantum advantage had already been demonstrated. Instead, he found that less than half of the audience believed this to be the case, despite several experimental claims since 2019 and even earlier quantum simulation results.

These reactions motivated Dominik’s mini-series, whose goal is to argue that existing quantum computers can already perform tasks beyond the reach of classical computers. In the first post, he reviews the notion of quantum advantage, focusing especially on random circuit sampling experiments, and explains that current claims rely on proxies and extrapolations from classically tractable regimes. He announces that Part 2 will examine the evidence for these claims and address skeptical arguments.

From my perspective, the question of whether large-scale quantum computational advantage has been achieved is a very important one and deserves careful scrutiny. (I myself have worked over the years on assessing this question, as well as the more fundamental question of whether quantum supremacy can be achieved at all.)  Related questions—such as progress toward quantum error-correcting codes and the status of Majorana zero modes—are also of great importance.

Finally, there was a remark on Dominik’s post from a researcher in India who works on applications of quantum computing to agriculture, and coverage on a quantum communication blog based in Pakistan. It is heartwarming to see how research in quantum computation can bring together scientists from India and Pakistan.

Posted in Computer Science and Optimization, People, Physics, Quantum | Tagged , | 5 Comments

A Ten-Year-Old Video about Larry Guth and Nets Katz.

Facebook recently reminded me of a videotaped lecture I gave ten years ago on Larry Guth and Nets Katz. In that lecture, I discussed their famous joint work on the Erdős distinct distances problem, as well as some of their individual contributions.

Many of the problems mentioned there—relating to the Kakeya problem, the Navier–Stokes equations, systolic inequalities, quantum error-correcting codes, and sum–product theorems—have led to fascinating research over the past decade. Alex Lubotzky and Noam Solomon also made short but forceful guest appearances in the video.

From the news

A few days ago my wife showed me an article (in Hebrew) reporting a startling new development in mathematics and asked whether I knew about the result and the two mathematicians involved.

Image

I did not recognize either of the two mathematicians from the accompanying picture, nor could I immediately identify the mathematical problem they were studying. But once I read the text, things became clearer: the reported progress concerned the Riemann Hypothesis, and the two mathematicians were Larry Guth and James Maynard—both of whom I know quite well (or at least, I thought I did).

Still, they looked rather different from how I remembered them, and I could not tell who was who. A bit of further investigation resolved the puzzle: the image was an AI-generated picture of two mathematicians discussing a mathematical problem. Apparently, in the age of AI, mathematical breakthroughs are still achieved by real people—but their photographs are optional.

AI and mathematics

I recently asked on MathOverflow about AI applications in mathematics and received interesting answers.

Posted in AI | Tagged , , , , | Leave a comment

Combinatorics News

Happy new 2026 to all our readers!

Here are some very interesting combinatorics results that I learned about recently. I will try to write in more detail about some of them later on.

1) Gaussian random graphs and Ramsey numbers, by Zach Hunter, Aleksa Milojević, and Benny Sudakov

This paper gives a simple proof of the recent and remarkable exponential improvement in Ramsey lower bounds obtained by Ma, Shen, and Xie. An alternative construction based on Gaussian random graphs simplifies the analysis of Ma, Shen, and Xie and also leads to improved quantitative bounds.

See also my July 2025 post: Amazing: Jie Ma, Wujie Shen, and Shengjie Xie Gave an Exponential Improvement for Ramsey Lower Bounds.

2) On the “second” Kahn–Kalai Conjecture by Quentin Dubroff, Jeff Kahn, and Jinyoung Park

The expectation threshold conjecture (also known as the Kahn–Kalai conjecture) was raised by Jeff Kahn and me in 2006. Our paper stated two conjectures: the first concerned general Boolean functions, and the second concerned graphs. The second conjecture did not quite follow from the first; this implication was posed as a third conjecture.

In this paper, Quentin, Jeff, and Jinyoung settle the second conjecture up to a (\log n)^2 factor.

See also my 2022 post: Amazing: Jinyoung Park and Huy Tuan Pham settled the expectation threshold conjecture! (The slide there actually states the second conjecture.)

3) Talagrand’s convolution conjecture up to loglog via perturbed reverse heat, by Yuansi Chen

Yuansi proves Talagrand’s convolution conjecture from 1989 (the third problem in this list) up to a loglog factor.

4) Sharp Fuss-Catalan thresholds in graph bootstrap percolation, by Zsolt Bartha, Brett Kolesnik, Gal Kronenberg, Yuval Peled

A graph G is K_r-weakly saturated if one can add its missing edges one by one so that every added edge creates a new copy of K_r. This notion, introduced by Béla Bollobás in 1967, is closely related to bootstrap percolation.

Starting with a random graph G \sim G(n,p), what is the critical value of p that guarantees that G is K_r-weakly saturated? Solving the central problem in the field, the paper identifies the threshold probability for K_r-weak saturation for every r \ge 5.

5) When does a tree activate the random graph? by Asaf Cohen Antonir, Yuval Peled, Asaf Shapira, Mykhaylo Tyomkyn, Maksim Zhukovskii

Every tree is weakly K_3-saturated. Korándi and Sudakov asked for the smallest value of p such that, with high probability, the random graph G \sim G(n,p) contains a spanning tree T such that one can add the remaining edges of G to T one by one, with each added edge creating a new triangle.

The main result of the paper is that the critical p is of order n^{-1/3-o(1)}. The proof relies on recent advances in the understanding of the fundamental group of random complexes. 
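
To make the definition of weak K_3-saturation concrete, here is a small sketch (illustrative code of mine; the function name is made up) that computes the triangle-bootstrap closure: repeatedly add any missing edge whose two endpoints have a common neighbor. A graph is K_3-weakly saturated exactly when this closure is the complete graph; since the process is monotone, the order in which eligible edges are added does not matter.

```python
from itertools import combinations

def k3_weakly_saturated(n, edges):
    """Check whether a graph on vertices 0..n-1 is K_3-weakly saturated:
    adding missing edges one by one, each creating a new triangle,
    must eventually yield the complete graph."""
    E = {frozenset(e) for e in edges}
    changed = True
    while changed:
        changed = False
        for u, v in combinations(range(n), 2):
            if frozenset((u, v)) in E:
                continue
            # adding uv creates a triangle iff u and v share a neighbor
            if any(frozenset((u, w)) in E and frozenset((v, w)) in E
                   for w in range(n) if w not in (u, v)):
                E.add(frozenset((u, v)))
                changed = True
    return len(E) == n * (n - 1) // 2
```

Running it on the path 0–1–2–3 returns True, illustrating that trees are weakly K_3-saturated, while a graph with two components can never percolate.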

Image

6) Exponential anticoncentration of the permanent, by Zach Hunter, Matthew Kwan, and Lisa Sauermann

The probability that a random \pm 1 matrix has a vanishing determinant is exponentially small (and by now we know quite precisely how small); this is a beautiful story that started with Komlós. The paper proves that the probability of a vanishing permanent is also exponentially small!

For more background see this post.
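
For readers who want to play with small cases, here is a minimal sketch (my own illustration, not the method of the paper) that computes the permanent straight from its definition, per(M) = \sum_{\sigma} \prod_i M_{i,\sigma(i)}, the determinant’s signless cousin.

```python
from itertools import permutations

def permanent(M):
    """Permanent of a square matrix: like the determinant,
    but summing over all permutations without signs."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][sigma[i]]
        total += prod
    return total
```

For instance, the \pm 1 matrix [[1, 1], [1, -1]] has permanent -1 + 1 = 0 while its determinant is -2; the new result says that for a large random \pm 1 matrix such vanishing of the permanent happens with only exponentially small probability. (The definition sums n! terms; Ryser’s formula brings this down to O(2^n n), but it remains exponential, in contrast to the determinant.)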

7) Disproof of the Odd Hadwiger Conjecture, by Marcus Kühn, Lisa Sauermann, Raphael Steiner, and Yuval Wigderson

Hadwiger’s conjecture asserts that if \chi (G)\ge t then G contains K_{t} as a minor. Stronger statements were known for t=3 (easy) and t=4 (ingenious), leading Gerards and Seymour to define (1993) the notion of odd minors and to propose a strengthening of Hadwiger’s conjecture: the odd Hadwiger conjecture.

This conjecture is disproved by Marcus, Lisa, Raphael, and Yuval, who show that there exist graphs that do not contain K_t as an odd minor and whose chromatic number is at least (\frac{3}{2}-o(1))t.

8) A proof of the Kim-Vu sandwich conjecture by Natalie Behague, Daniel Il’kovič, and Richard Montgomery

Finally, this paper resolves a twenty-year-old conjecture of Kim and Vu relating properties of random regular graphs—a notoriously difficult model—to those of the Erdős–Rényi random graph model G(n,p).

Posted in Combinatorics | 1 Comment