Week 01.02.2026 – 07.02.2026
Monday (02 Feb)
I will present recent joint work with Leandro Chiarini, Tyler Helmuth, and Ellen Powell on the hard-core model on ℤ², a model of independent sets on the square lattice. We show that under weak random disorder, this model has no phase transition in two dimensions. This behavior is known as the Imry–Ma phenomenon, whose most classical example is the random-field Ising model. Our proof is inspired by the Aizenman–Wehr argument for the random-field Ising model, but relies on spatial symmetries rather than internal spin symmetries.
Transfer operators such as the Koopman and Perron–Frobenius operators provide a powerful linear framework for analyzing nonlinear dynamical systems: their spectral properties reveal long-term behavior as well as coherent and metastable structures. Classical data-driven approximation methods, such as EDMD for the Koopman operator and SINDy and PDE-FIND for discovering a system's governing equations, rely on a predefined dictionary of basis functions whose choice is highly problem-dependent and often requires domain knowledge. We propose two complementary approaches to address this challenge. First, a gradient descent-based framework that learns interpretable basis functions from data, enabling adaptive dictionary construction for EDMD, SINDy, and PDE-FIND, and hence for both transfer operator approximation and the discovery of governing equations. Second, a randomized neural network method, called RaNNDy, for learning transfer operators and their spectral decompositions, in which the hidden-layer weights are fixed at random and only the output layer is trained. This yields a significant computational advantage, closed-form expressions for the output layer that directly represent the eigenfunctions, and uncertainty quantification via ensemble learning. We demonstrate the effectiveness of these methods on several examples, including ODEs, SDEs, protein folding dynamics, and the quantum harmonic oscillator.
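To illustrate the dictionary dependence of EDMD mentioned in the abstract, here is a minimal sketch (not the speakers' code; the toy system, the hand-picked monomial dictionary, and all variable names are illustrative assumptions). For a linear map the Koopman action on the span of {1, x1, x2} is exact, so the least-squares matrix recovers the eigenvalues of the map plus the trivial eigenvalue 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear system x_{k+1} = A x_k. On the span of {1, x1, x2},
# the Koopman eigenvalues are {1} together with the eigenvalues of A.
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])

# Sample state pairs (x, y = F(x)) from random initial conditions.
X = rng.standard_normal((1000, 2))
Y = X @ A.T

def dictionary(S):
    """Hand-picked monomial dictionary {1, x1, x2} (rows = samples).

    This choice is exactly the problem-dependent ingredient that the
    proposed methods learn from data instead.
    """
    return np.column_stack([np.ones(len(S)), S[:, 0], S[:, 1]])

PhiX, PhiY = dictionary(X), dictionary(Y)

# EDMD: least-squares matrix K with PhiX @ K ≈ PhiY; the eigenvalues
# of K approximate Koopman eigenvalues.
K, *_ = np.linalg.lstsq(PhiX, PhiY, rcond=None)

eigvals = np.sort(np.real(np.linalg.eigvals(K)))  # ≈ [0.5, 0.9, 1.0]
```

If the dictionary omitted x2, the Koopman image of x1 (namely 0.9 x1 + 0.1 x2) would no longer lie in the dictionary's span and the recovered spectrum would degrade; this dictionary-selection problem is what both proposed approaches target.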