mirror of the warmongers

Posted in Books, Kids, pictures on March 11, 2026 by xi'an

The New York Times reporter Anton Troianovski drew a ghastly parallel between the declarations of Russian officials on the invasion of Ukraine and those of US officials on the attacks on Iran, against the background of the lack of a long-term strategy behind these attacks and the highly unlikely emergence of an Iranian democracy…

“[the United States] didn’t start this war, but under President Trump, we are finishing it.” — US Defense Secretary Pete Hegseth

“We didn’t start the so-called war in Ukraine. Rather, we are trying to finish it.” — President Vladimir V. Putin

“This is a special military operation. If Russia had started a full-scale war, it would have been over long ago.” — Duma Speaker Vyacheslav Volodin

“I think it’s an operation.” — Speaker of the US House Mike Johnson

“We haven’t even yet started anything in earnest.” — President Vladimir V. Putin

“We haven’t even started hitting them hard.” — President Donald Trump


“I am appealing to the military of the armed forces of Ukraine. Do not allow neo-Nazis and Banderites to use your children, wives and elders as a live shield. Take power into your own hands.” — President Vladimir V. Putin

“I call upon all Iranian patriots who yearn for freedom to seize this moment to be brave, be bold, be heroic, and take back your country.” — President Donald Trump

“the leaders of France, Germany and the UK (…) are the ones that are wrong by refusing to come to the Iranian people’s aid and adding insult to injury, you’re suggesting we should continue to negotiate with religious Nazis.” — US Senator Lindsey Graham

While the NYT article does not draw a similar mirror with Israeli officials’ statements on their attacks on Iran and Lebanon, those follow the very same warlike rhetoric…

mostly Monte Carlo [13/03]

Posted in Statistics, Travel, University life on March 10, 2026 by xi'an


A new episode of our mostly Monte Carlo seminar, very soon coming near you (if in Paris):

On Friday 13/03/26, from 3pm to 5pm at PariSanté Campus

15h00: Pierre Del Moral (INRIA, Bordeaux)

On the Kantorovich contraction of Markov semigroups

We present a novel operator theoretic framework to study the contraction properties of Markov semigroups with respect to a general class of Kantorovich semi-distances, which notably includes Wasserstein distances. This rather simple contraction cost framework combines standard Lyapunov techniques with local contraction conditions. Our results can be applied to both discrete time and continuous time Markov semigroups, and we illustrate their wide applicability in the context of (i) Markov transitions on models with boundary states, including bounded domains with entrance boundaries, (ii) operator products of a Markov kernel and its adjoint, including two-block-type Gibbs samplers, (iii) iterated random functions and (iv) diffusion models, including overdamped Langevin diffusion with convex-at-infinity potentials.
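To give a flavour of the target property (my own gloss on the abstract, with generic notation), the goal is a geometric contraction of the semigroup in a Kantorovich semi-distance W, deduced from a Lyapunov drift plus a local contraction condition:

```latex
% sketch of the generic statement (notation mine, not the speaker's):
% geometric contraction of the Markov semigroup (P^n) in a Kantorovich
% semi-distance W (e.g., a Wasserstein distance), for some rho in (0,1):
\[
  W(\mu P^n, \nu P^n) \;\le\; c\,\rho^{\,n}\, W(\mu,\nu), \qquad n \ge 0,
\]
% typically obtained by combining a Lyapunov drift condition
\[
  P V \;\le\; \lambda V + b, \qquad \lambda \in (0,1),\ b < \infty,
\]
% with a local contraction condition on the sublevel sets of V.
```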

16h00: Bob Carpenter (Flatiron Institute, New York)

GIST, WALNUTS, and Continuous Nutpie: mass-matrix and step-size adaptation for Hamiltonian Monte Carlo

I will introduce Gibbs self tuning (GIST), our new technique for coupling tuning parameters and conditionally Gibbs-sampling them per iteration in Hamiltonian Monte Carlo. Then I will turn to the within-orbit adaptive NUTS (WALNUTS) sampler, which adapts the step size every leapfrog step in order to conserve the Hamiltonian. Empirical evaluations on varying multi-scale target distributions, including Neal’s funnel and the Stock-Watson stochastic volatility time-series model, demonstrate that WALNUTS achieves substantial improvements in sampling efficiency and robustness. I will review the Nutpie mass-matrix adaptation scheme, which is designed to minimize Fisher divergence by estimating the mass matrix as the geometric midpoint (aka barycenter) between the inverse covariance of the draws and the covariance of the scores of the draws. Then I will describe a continuously adapting version that adapts per iteration by continuously discounting the past rather than updating in fixed blocks. I will also show how the Adam optimizer outperforms dual averaging for step-size adaptation. I will conclude by considering a lock-free multi-threading implementation that automatically monitors adaptation and sampling for convergence for automatic stopping.
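As a rough illustration of the within-orbit step-size idea (a toy sketch of mine, not the actual WALNUTS criterion), one can halve the leapfrog step size until the one-step energy error falls below a tolerance:

```python
import numpy as np

def leapfrog(q, p, grad_U, eps):
    """One leapfrog step for the Hamiltonian H(q, p) = U(q) + p.p/2."""
    p = p - 0.5 * eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

def adaptive_leapfrog(q, p, U, grad_U, eps0, tol=1e-2, max_halvings=10):
    """Halve eps until the one-step energy error is below tol.
    This halving rule and tolerance are hypothetical stand-ins,
    not the actual WALNUTS adaptation."""
    H0 = U(q) + 0.5 * p @ p
    eps = eps0
    for _ in range(max_halvings):
        q1, p1 = leapfrog(q, p, grad_U, eps)
        if abs(U(q1) + 0.5 * p1 @ p1 - H0) < tol:
            break
        eps *= 0.5
    return q1, p1, eps

# standard Gaussian target: U(q) = q.q/2, so grad U(q) = q
U = lambda q: 0.5 * q @ q
grad_U = lambda q: q
q1, p1, eps = adaptive_leapfrog(np.array([1.0]), np.array([1.0]),
                                U, grad_U, eps0=1.0)
```

In the actual sampler the adaptation sits inside the NUTS orbit construction so that detailed balance is preserved; the snippet only conveys the conserve-the-Hamiltonian criterion.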

OWABI⁷, 25 March 2026: Robust Simulation Based Inference (10am EST)

Posted in Books, Statistics, University life on March 9, 2026 by xi'an


Speaker:  Larry Wasserman (Carnegie Mellon University)

Title: Robust Simulation Based Inference
Abstract: Simulation-Based Inference (SBI) is an approach to statistical inference where simulations from an assumed model are used to construct estimators and confidence sets. SBI is often used when the likelihood is intractable and to construct confidence sets that do not rely on asymptotic methods or regularity conditions. Traditional SBI methods assume that the model is correct, but, as always, this can lead to invalid inference when the model is misspecified. This paper introduces robust methods that allow for valid frequentist inference in the presence of model misspecification. We propose a framework where the target of inference is a projection parameter that minimizes a discrepancy between the true distribution and the assumed model. The method guarantees valid inference, even when the model is incorrectly specified and even if the standard regularity conditions fail. Alternatively, we introduce model expansion through exponential tilting as another way to account for model misspecification. We also develop an SBI based goodness-of-fit test to detect model misspecification. Finally, we propose two ideas that are useful in the SBI framework beyond robust inference: an SBI based method to obtain closed form approximations of intractable models and an active learning approach to more efficiently sample the parameter space.
Keywords: Exponential tilting, model misspecification, robust inference, simulation based inference, valid inference.
Reference: Lorenzo Tomaselli, Valérie Ventura, Larry Wasserman. Robust Simulation Based Inference. Preprint, arXiv:2508.02404
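In symbols (my notation, not the abstract's), the target of inference under misspecification is the projection of the true distribution onto the assumed model:

```latex
% projection parameter, for a discrepancy D between distributions
% (e.g., Hellinger, power divergence, or MMD):
\[
  \theta^\star(P) \;=\; \arg\min_{\theta \in \Theta} \; D\left(P,\, P_\theta\right),
\]
% so that confidence sets are required to cover theta*(P) even when
% P does not belong to the model family {P_theta : theta in Theta}.
```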

International Women’s Day

Posted in Kids, pictures, Travel on March 8, 2026 by xi'an


robust simulation-based inference

Posted in Books, pictures, Statistics, University life on March 7, 2026 by xi'an

This new arXival by Lorenzo Tomaselli, Valérie Ventura, and Larry Wasserman (from CMU) considers simulation-based inference under model misspecification (as we did for ABC in our 2020 Series B paper). Which is almost always the case. In the paper, SBI is defined as producing N parameters and N samples from the prior and the corresponding sampling distribution, respectively, and then doubling the resulting sample by permuting at random the parameters θ. This means that the second half is distributed from the product of the prior and of the marginal, hence that the classification odds ratio is proportional to the likelihood, hence providing an estimation method (and likelihood trick) à la Geyer. From this estimate, an ABC p-value can be derived, but it is incorrect as such when the model is misspecified. Hence the use of the Hellinger discrepancy, the power divergence and the kernel distance (or MMD) as alternatives to the misspecified MLE.
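A toy numpy rendition of this classification/odds-ratio device (my own illustration on a Gaussian toy model, not the authors' code): pairing each θ with its simulated x gives joint draws, permuting the θ's gives draws from the product of prior and marginal, and the log-odds of a logistic classifier then estimate log p(x|θ) − log p(x):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20_000

# toy model: theta ~ N(0, 1), x | theta ~ N(theta, 1)
theta = rng.standard_normal(N)
x = theta + rng.standard_normal(N)
theta_perm = rng.permutation(theta)  # breaks the pairing: product sample

def feats(t, y):
    # quadratic features: the exact Gaussian log-ratio lies in their span
    return np.column_stack([np.ones_like(t), t, y, t * y, t**2, y**2])

X = np.vstack([feats(theta, x), feats(theta_perm, x)])
lab = np.concatenate([np.ones(N), np.zeros(N)])  # 1 = joint, 0 = product

# plain logistic regression fitted by Newton/IRLS
w = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
    g = X.T @ (lab - p)
    H = X.T @ (X * (p * (1 - p))[:, None])
    w += np.linalg.solve(H + 1e-6 * np.eye(len(w)), g)

def log_ratio(t, y):
    """Estimated log p(y | t) - log p(y), read off the classifier log-odds."""
    return feats(np.atleast_1d(t), np.atleast_1d(y)) @ w
```

The hand-crafted quadratic features only work because the toy model is Gaussian; in the paper the ratio estimation is carried out nonparametrically (see the RKHS construction below in the sense of the next paragraph of the review).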

The paper then expands on approximating density ratios by virtue of a reproducing kernel Hilbert space, using a Gaussian kernel. (With a nice remark on requiring only one single ratio estimator for all values of θ, albeit in the joint space.) And focuses on a studentized MMD estimator (à la e-value) to build a confidence set that remains valid under model misspecification. And without regularity assumptions.
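For reference, the (biased, V-statistic) Gaussian-kernel MMD² estimator underlying such a construction takes a few lines of numpy (my sketch; the paper's studentization and calibration are not reproduced here):

```python
import numpy as np

def mmd2(X, Y, bw=1.0):
    """Biased (V-statistic) estimate of squared MMD between samples X and Y,
    both of shape (n, d), under a Gaussian kernel with bandwidth bw."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bw**2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 1))
same = mmd2(X, rng.standard_normal((500, 1)))        # same distribution
diff = mmd2(X, rng.standard_normal((500, 1)) + 3.0)  # shifted by 3
```

The V-statistic version is nonnegative by construction (it is a squared RKHS norm); the studentized statistic of the paper further divides by a variance estimate to calibrate the confidence set.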

Another approach is further explored, based on exponential tilting, of which I am not a great fan, from being highly dependent on the choice of the pseudo-sufficient statistic, to requiring an intractable normalising constant, to requiring an extra optimization, even though I appreciate the mathematical appeal of the construct. It seems to require a sample simulation for each value of θ at the learning stage, albeit relying on the same likelihood trick. The appropriateness of the tilting can be tested by a goodness-of-fit test tailored for the SBI structure, which sounds rather greedy in the required simulations.
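In my notation, the tilting expands the assumed model as

```latex
% exponential tilting of the assumed model p_theta, with a (pseudo-)
% sufficient statistic s(x) and tilt parameter lambda:
\[
  p_{\theta,\lambda}(x) \;=\; \frac{p_\theta(x)\, e^{\lambda^{\top} s(x)}}{Z(\theta,\lambda)},
  \qquad
  Z(\theta,\lambda) \;=\; \int p_\theta(x)\, e^{\lambda^{\top} s(x)}\,\mathrm{d}x,
\]
% with lambda = 0 recovering the assumed model and Z(theta, lambda)
% being the intractable normalising constant deplored above.
```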

Besides the g-and-k distribution example (which, as pointed out several times on the ‘Og, is not intractable, strictly speaking!), the paper studies a mixture example, despite Larry dubbing them as evil as tequila a long while ago! (The paper also offers a section called accoutrements, which is my first encounter with this use of the term, usually found in medieval contexts!)

Note that Larry will present the paper at the OWABI webinar next 25 March!