CMX is a research group devoted to the development and analysis of novel algorithmic ideas underlying emerging applications in the physical, biological, social, and information sciences. We are distinguished by a shared value system built on the development of foundational mathematical understanding, and on the deployment of that understanding to address key emerging scientific and technological challenges.


Faculty

Oscar Bruno
Venkat Chandrasekaran
Thomas Hou
Houman Owhadi
Peter Schröder
Andrew Stuart
Joel Tropp

Von Karman Instructors

Catherine Babecki
Kathrin H. Hellmuth
Sangmin Park

Postdoctoral Researchers

Lianghao Cao
Bohan Chen
Michael Sleeman
George Stepaniants
So Takao
Margaret Trautner
Claire Valva
Xianjin Yang

Grad Students

Theo Bourdais
Edoardo Calvello
Matthieu Darcy
Yasamin Jalalian
Dohyeon Kim
Jonghyeon Lee
Eitan Levin
Huiwen Lu
Haakon Ludvig Ervik
Elvira Moreno
Mayank Raj
Sabhrant Sachan
Manuel Santana
Peicong Song
Chuwei Wang
Yixuan (Roy) Wang
Florian Wolf
Changhe Yang
Jennifer Ying

Lunch Seminars

(Held at 12 noon Pacific time in ANB 213, or on Zoom when noted)


October 7, 2025
Jinghao Cao [Caltech]

▦ Analytical and computational methods for metamaterials ▦


Metamaterials enable wave phenomena far beyond natural materials, yet their analysis requires tools that capture high-contrast resonances, time modulation, and non-Hermitian effects. I will present recent progress on the PDE analysis of these systems, highlighting asymptotic methods and spectral theory for resonant and modulated media. On the computational side, I will introduce fast Fourier-based quadrature schemes and accelerated solvers for boundary integral formulations. These approaches make large-scale simulations of complex metamaterials feasible while retaining analytical precision. Together, they bridge rigorous mathematics and scalable computation, offering predictive models for applications in acoustics, photonics, and sustainable materials design.


October 21, 2025
Yifan Chen [University of California, Los Angeles]

▦ Exploring high dimensions in dynamical sampling: flattening the scaling curve ▦


Dynamical sampling of probability distributions based on a model or on data (i.e., generative modeling) is a central task in scientific computing and machine learning. I'll present recent work on understanding and improving such algorithms in high-dimensional settings. This includes a novel "delocalization of bias" phenomenon in Langevin dynamics, where biased methods can achieve dimension-free scaling for low-dimensional marginals while unbiased methods cannot, a finding motivated by molecular dynamics simulations. I'll also briefly mention a new unbiased affine-invariant Hamiltonian sampler that outperforms the popular samplers in the emcee package (routinely used in the astrophysics literature) in high dimensions, and introduce optimal Lipschitz-energy criteria for the design of measure transport in generative modeling of multiscale scientific data, as an alternative to the optimal kinetic energy of optimal transport. These examples show how the dimensional scaling can be flattened, allowing efficient stochastic algorithms for high-dimensional sampling and generative modeling in relevant scientific applications.
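
As background for the "biased methods" mentioned above, here is a minimal sketch of the unadjusted Langevin algorithm (ULA), whose fixed-step discretization bias is the kind of error the delocalization result concerns. The Gaussian target and parameter values are illustrative assumptions of ours, not taken from the talk.

import numpy as np

# Minimal unadjusted Langevin algorithm (ULA) for a target density
# proportional to exp(-V(x)). The fixed step size h biases the stationary
# distribution; the talk studies how such bias scales with dimension.
def ula(grad_V, x0, h=1e-2, n_steps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        x = x - h * grad_V(x) + np.sqrt(2 * h) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

# Illustrative high-dimensional standard Gaussian target: V(x) = |x|^2 / 2.
d = 100
samples = ula(lambda x: x, x0=np.zeros(d))
print("variance of the first marginal:", samples[:, 0].var())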


November 4, 2025
Ousmane Kodio [University of California, Santa Barbara]

▦ Pattern Formation in Soft Mechanics ▦


Elastic instabilities are ubiquitous in natural and engineered systems across a wide range of scales, from supercoiled DNA and folded tissues to flower petals and deployable space structures. While great progress has been made over the past two centuries in predicting the equilibrium shapes of stressed materials, the dynamics of buckling and wrinkling remain rich with theoretical and computational challenges.

In this talk, I will present our recent theoretical and experimental efforts to understand how elastic patterns evolve when driven far from equilibrium by mechanical and hydrodynamic instabilities. In the first part, I will discuss the evolution of wrinkle patterns in confined elastic membranes floating on fluid surfaces, showing how confinement slows down pattern selection and leads to departures from the self-similar behaviors familiar in fluid mechanics. In the second part, I will demonstrate how rapid quenching can trigger the emergence of nontrivial buckling modes, and how tuning external control parameters enables the targeted selection of specific patterns. This phenomenon, reminiscent of the Kibble-Zurek mechanism in continuous non-equilibrium phase transitions, opens new avenues for the dynamical design of elastic patterns.


November 18, 2025
Chris Vales [Dartmouth College]

▦ Data-driven dynamical closure of partial differential equations ▦


I present a data-driven dynamical closure scheme for problems governed by partial differential equations. The scheme employs the operator-theoretic framework of quantum mechanics to embed the original classical dynamics into an infinite-dimensional dynamical system, using the space of quantum states to model the unresolved degrees of freedom of the original dynamics and the quantum Bayes rule to predict their contributions to the resolved dynamics. To realize the scheme numerically, the embedded dynamics is projected to finite dimension by a positivity-preserving discretization, leading to a finite-dimensional representation that is invariant under the dynamical symmetries of the resolved dynamics. I show numerical results from applying the scheme to a closure problem for the shallow water equations, demonstrating accurate prediction of the resolved dynamics for out-of-sample initial conditions.


January 13, 2026
Keaton Burns [MIT Math]

▦ Solving PDEs exactly over polynomials ▦


Global spectral methods are a classical technique for solving partial differential equations (PDEs) in simple geometries, including Fourier series in periodic domains and orthogonal polynomials in bounded domains. While traditional "collocation" approaches for polynomials are slow at scale, modern spectral methods reformulate these systems in sparse operator form, enabling fast and accurate solvers with near-FFT performance. However, such methods may suffer from poor conditioning, difficulties at coordinate singularities, and conservation issues that depend sensitively on their formulation.

In this context, we will present a "generalized tau" framework that unifies all polynomial and trigonometric spectral methods, from classical collocation to modern "ultraspherical" schemes. In particular, we examine the exact discrete equations solved by each method and characterize their deviation from the original PDE in terms of perturbations called "tau corrections." By analyzing these corrections, we can precisely categorize existing methods and design new solvers that robustly accommodate new boundary conditions, eliminate spurious numerical modes, and satisfy exact conservation laws.
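
To fix notation (ours, and schematic; the generalized framework of the talk is broader), the classical tau method replaces the PDE by an exactly solvable perturbed problem:

% Schematic Lanczos tau method. L is the differential operator, B the
% boundary operator, and the p_j are fixed high-degree polynomials (e.g.,
% Chebyshev polynomials); the k scalars tau_i are fixed by the k boundary
% conditions, and the "tau corrections" quantify the deviation from Lu = f.
\mathcal{L}\, u_N = f + \sum_{i=1}^{k} \tau_i \, p_{N-i+1}, \qquad \mathcal{B}\, u_N = g.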

We will demonstrate the capabilities of this system as implemented in Dedalus, an open-source Python framework for solving PDEs using sparse spectral methods. Dedalus provides a symbolic equation specification system that allows users to define their own PDEs and automatically constructs optimally sparse, parallelized, and differentiable solvers tailored to the chosen equations and geometry. We will present examples combining the generalized tau method with new spectral bases for curvilinear domains, providing fast and well-conditioned solvers for general tensor-valued PDEs in cylinders, disks, spheres, and balls.



January 20, 2026
Angxiu Ni [University of California, Irvine]

▦ New linear response formulas with applications to variational data assimilation and generative SDE models ▦


We present several new formulas for the linear response (parameter derivatives of marginal or stationary measures) of SDEs. The formulas subsume classical approaches (path perturbation, divergence, and kernel differentiation) and overcome key difficulties such as chaos, high dimensionality, and parameterized noise.
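
To fix ideas, in schematic notation of our own: for an SDE with parameter-dependent coefficients, the object in question is the parameter derivative of an expectation under the stationary (or time-marginal) measure:

% dX_t = b_theta(X_t) dt + sigma_theta(X_t) dW_t, with law rho_theta;
% the formulas in the talk are different rewritings of this derivative
% that can be estimated by simulating sample paths.
\frac{d}{d\theta} \int f(x)\, \rho_\theta(dx).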

With the new adjoint path-kernel formula, we solve a challenging variational data assimilation problem where (i) the deterministic dynamics is chaotic, (ii) the objective is a single long-time function measuring mismatch in both observations and dynamics, (iii) some dynamical parameters are unknown, and (iv) the state is only partially observed.

With another divergence-kernel formula, we introduce a generative model, DK-SDE, where the model is a parameterized SDE trained by minimizing the KL divergence between the data and the SDE marginal law. The framework allows parametrizations in both drift and diffusion (enabling explicit priors in dynamics), and its gradient computation uses only forward processes, substantially reducing memory cost.



February 10, 2026
Nick Boffi [Carnegie Mellon University]

▦ Flow Maps: Flow-based generative models with lightning-fast inference ▦


Flow-based models have spurred a revolution in generative modeling, driving astounding advances across diverse domains, including high-resolution text-to-image synthesis and de novo drug design. Yet despite their remarkable performance, inference in these models requires the solution of a differential equation, which is extremely costly for the large-scale neural network models used in practice. In this talk, we introduce a mathematical theory of flow maps, a new class of generative models that directly learn the solution operator of a flow-based model. By learning this operator, flow maps can generate data in 1-4 network evaluations, leading to orders-of-magnitude faster inference compared to standard flow-based models. We discuss several algorithms that emerge from our theory for efficiently learning flow maps in practice, and we show how many popular recent methods for accelerated inference, including consistency models, shortcut models, align your flow, and mean flow, can be viewed as particular cases of our formalism. We demonstrate the practical effectiveness of flow maps across several tasks, including image synthesis, geometric data generation, and inference-time guidance of pre-trained text-to-image models.
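
Schematically, in our notation: if sampling follows the probability flow ODE, the flow map is its two-time solution operator, and the semigroup property below is what a learned map must respect for few-step generation:

% X_{s,t} transports a sample from time s to time t along dx/dt = v_t(x);
% learning X_{s,t} directly avoids integrating the ODE at inference time.
\frac{d}{dt}\, X_{s,t}(x) = v_t\big(X_{s,t}(x)\big), \qquad X_{s,s}(x) = x, \qquad X_{t,u} \circ X_{s,t} = X_{s,u}.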



February 17, 2026
Ricardo Baptista [University of Toronto]

▦ A Principled Framework for Discrete Diffusion Models via Denoising ▦


Discrete generative models provide a probabilistic framework for representing and sampling discrete data such as text sequences. In the continuous setting, score-based diffusion models have rapidly become the state of the art for tasks involving images, video, and other continuous-valued data. A key reason for their success is that estimating the score function, the gradient of the log-density of a perturbed data distribution, can be linked to a denoising problem through Tweedie's formula, which enables the use of well-established supervised methods for learning the score. In the discrete setting, diffusion models offer a promising alternative to autoregressive models for large language models, generating entire text sequences at once. However, they have not yet achieved performance comparable to autoregressive models, and they often require specialized loss functions and architectures to approximate quantities analogous to the score function. We begin by reviewing the mathematical formulation of discrete diffusion models and then introduce a framework that parallels continuous flow-based generative modeling. Specifically, we propose Binomial flows for non-negative ordinal data. We show that this approach provides a simple recipe for training, sampling, and computing exact likelihoods in discrete diffusion models via a discrete version of Tweedie's formula. Finally, we will demonstrate that sampling can be performed using a Poisson-Föllmer process, which has desirable theoretical properties and yields competitive performance on real-world image generation tasks.
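
For reference, the continuous-setting identity alluded to above: for the Gaussian perturbation x_t = x_0 + \sigma_t \varepsilon with \varepsilon \sim \mathcal{N}(0, I), Tweedie's formula expresses the score through the posterior-mean denoiser:

% The score of the perturbed density p_t is recovered from the conditional
% mean E[x_0 | x_t], so score estimation reduces to supervised denoising.
\nabla_{x_t} \log p_t(x_t) = \frac{\mathbb{E}[x_0 \mid x_t] - x_t}{\sigma_t^2}.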



March 24, 2026
Ali Pakzad [California State University, Northridge]

▦ Can Local Data Reveal Global Fluid Dynamics? ▦


Data assimilation aims to reconstruct the state of a dynamical system by combining partial observations with a mathematical model. For fluid flows governed by the incompressible Navier-Stokes equations, classical results show that coarse observations distributed across the entire spatial domain can recover the full flow through continuous nudging, via the Azouani-Olson-Titi (2014) algorithm. In practice, however, sensor placement is often limited, and efficient reconstruction of turbulent flows requires strategic positioning of the available measurements. In this talk, we challenge the standard framework by showing that it is possible to recover the full system dynamics using only local observations from a subregion of the domain. In particular, we demonstrate that achieving global accuracy does not necessarily require global data: carefully chosen localized observations can be sufficient to synchronize the model with the true flow. This naturally raises a fundamental question: given a physical domain, should the observational region be placed near the boundary or away from it? We discuss recent theoretical results and numerical experiments that aim to shed light on this question.
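
For context, a standard statement of the nudging algorithm (sketched in our notation; the local-observation setting of the talk modifies where the data enter): the true flow u, observed only through a coarse interpolation operator I_h, drives a model copy v with relaxation parameter \mu > 0:

% Nudged incompressible Navier-Stokes: u is the unknown reference solution,
% seen only through I_h(u); the penalty term synchronizes v with u in time.
\partial_t v + (v \cdot \nabla) v - \nu \Delta v + \nabla q = f + \mu \big( I_h(u) - I_h(v) \big), \qquad \nabla \cdot v = 0.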



April 7, 2026
Justin Sirignano [University of Oxford]

▦ Convergence Analysis of Deep Galerkin Methods for solving PDEs and Adjoint-optimized Neural-Network PDE Models ▦


Deep Galerkin Methods (DGM) and physics-informed neural networks (PINNs) directly solve partial differential equations (PDEs) with neural networks. For linear elliptic PDEs, we prove that DGM/PINNs trained with gradient descent (despite the non-convexity of neural networks) globally converge to the PDE solution as the number of training steps and hidden units goes to infinity. A key technical challenge is the lack of a spectral gap for the training dynamics of the neural network. A related application of interest in applied mathematics and engineering is using deep learning to model unknown terms within a PDE, such as closure models in large-eddy simulation (LES) and Reynolds-averaged Navier-Stokes (RANS) simulation. The neural network terms in the PDE are optimized using adjoint PDEs, which again requires minimizing a highly non-convex objective function. Similarly to the result for DGM/PINNs, we are able to prove (for semilinear parabolic equations) that the trained neural network PDE model converges to a global minimizer. Numerical results for LES and RANS with adjoint-optimized neural network closure models will be presented for several canonical examples in fluid dynamics.
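
Concretely, for a PDE residual \mathcal{N}(u) = 0 on a domain \Omega with boundary data g, the DGM/PINN objective has the schematic least-squares form below (our notation; \lambda weights the boundary term). The convergence results concern gradient descent on this non-convex objective as the network width grows.

% Residual least-squares objective minimized over network parameters theta.
J(\theta) = \mathbb{E}_{x \sim \Omega} \Big[ \big( \mathcal{N}(u_\theta)(x) \big)^2 \Big] + \lambda\, \mathbb{E}_{x \sim \partial \Omega} \Big[ \big( u_\theta(x) - g(x) \big)^2 \Big].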



April 14, 2026
Silvio Barandun [Massachusetts Institute of Technology]

▦ Non-Hermitian skin effect in resonator systems ▦


The skin effect is the phenomenon whereby the bulk eigenmodes of a non-Hermitian system are all localised at one edge of an open system. I will present the mathematical theory of the non-Hermitian skin effect in systems of finitely and infinitely many subwavelength resonators with a non-Hermitian imaginary gauge potential, and analyse its resonances in the deep subwavelength regime. I will particularly focus on localisation results arising from the Toeplitz nature of the discrete approximation of the system. A part of the presentation will touch on disordered localisation effects similar to Anderson localisation.
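
As a toy illustration of the Toeplitz structure mentioned above (a generic Hatano-Nelson-type chain of our choosing, not the resonator model of the talk), the following sketch builds an open chain with asymmetric hopping and checks that every eigenmode accumulates at one edge:

import numpy as np

# Hatano-Nelson-type tridiagonal Toeplitz matrix: asymmetric nearest-neighbor
# hopping e^{g} vs. e^{-g} models a non-Hermitian imaginary gauge potential.
N, g = 60, 0.3
H = np.exp(g) * np.eye(N, k=-1) + np.exp(-g) * np.eye(N, k=1)

eigvals, eigvecs = np.linalg.eig(H)
mass = np.abs(eigvecs) ** 2  # site-wise weight of each (unit-norm) eigenmode

# Skin effect: with open boundaries, all eigenmodes pile up at one edge,
# so nearly all of their mass sits in one half of the chain.
print("average mass in the right half:", mass[N // 2:, :].sum(axis=0).mean())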



May 5, 2026
Nicholas Nelsen [Cornell University]

▦ TBA ▦


TBD



May 19, 2026
Elias Hess-Childs [Carnegie Mellon University]

▦ TBA ▦


TBD


Date TBD
David Mordecai [University of Chicago Booth School of Business]

▦ TBA ▦


TBD



June 9, 2026
Thomas O'Leary-Roseberry [Ohio State University]

▦ TBA ▦


TBD



Other Seminars

(Time and location vary)


December 9, 2025
• CMX Special Seminar •

ANB 105
11:00 am


Qing Qu [University of Michigan]

▦ Understanding Generalization of Deep Generative Models Requires Rethinking Underlying Low-dimensional Structures ▦

Diffusion models represent a remarkable new class of deep generative models, yet the mathematical principles underlying their generalization from finite training data are poorly understood. This talk offers novel theoretical insights into diffusion model generalization through the lens of "model reproducibility," revealing a surprising phase transition from memorization to generalization during training, notably occurring without the curse of dimensionality. Our theoretical framework hinges on two crucial observations: (i) the intrinsic low dimensionality of image datasets and (ii) the emergent low-rank property of the denoising autoencoder within trained neural networks. Under simplified settings, we rigorously establish that optimizing the training loss of diffusion models is mathematically equivalent to solving a canonical subspace clustering problem. This insight quantifies the minimal sample requirements for learning low-dimensional distributions, scaling linearly with the intrinsic dimension. Furthermore, by investigating this problem under a nonlinear two-layer network, we fully explain the memorization-to-generalization transition, highlighting inductive biases in the learning dynamics and the models' strong representation learning ability. These theoretical insights have profound practical implications, enabling applications in generation control and safety, including concept steering, watermarking, and memorization detection. This work not only advances theoretical understanding but also opens numerous directions for applications in engineering and science.
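
For orientation, the training loss in question is the standard denoising objective, stated here in schematic form (our notation; \varepsilon_\theta is the denoising network whose emergent low-rank structure the talk analyzes):

% Denoising score-matching loss over data x_0, noise eps, and noise level t.
\min_\theta \; \mathbb{E}_{x_0,\, \varepsilon \sim \mathcal{N}(0, I),\, t} \big\| \varepsilon_\theta(x_0 + \sigma_t \varepsilon,\, t) - \varepsilon \big\|^2.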


June 9, 2026
• CMX Special Seminar •

ANB 213
3:00 pm


Anna Yesypenko [Ohio State University]

▦ TBA ▦


TBD



Meetings and Workshops