At the core of Durmus’s research are two increasingly prominent families of generative AI models: diffusion models and flow-based models. These systems take random noise and transform it, step by step, into complex data, be it an image, a molecule, or even a physical simulation. Their success has been undeniable, but their foundations remain shaky.
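To make that step-by-step transformation concrete, here is a minimal toy sketch of a diffusion sampler in one dimension. It is purely illustrative and not drawn from Durmus's work: the "data" distribution is a single Gaussian N(MU, SIGMA²), chosen so the score of every noised marginal has a closed form; in a real model, a neural network would estimate that score. All names, the noise schedule, and the step counts below are assumptions made for the example.

```python
import math
import random

# Toy 1D diffusion sampler (DDPM-style ancestral sampling).
# Data distribution: N(MU, SIGMA^2). Because it is Gaussian, the score of
# each noised marginal is known exactly -- no neural network needed here.
MU, SIGMA = 2.0, 0.5
T = 500  # number of diffusion steps (illustrative choice)
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]  # linear schedule

# Precompute cumulative products \bar{alpha}_t = prod_{s<=t} (1 - beta_s).
alpha_bars = []
ab = 1.0
for b in betas:
    ab *= 1.0 - b
    alpha_bars.append(ab)

def score(x, t):
    """Exact score of the noised marginal q_t = N(m_t, s_t^2)."""
    ab = alpha_bars[t]
    m = math.sqrt(ab) * MU
    s2 = ab * SIGMA ** 2 + (1.0 - ab)
    return -(x - m) / s2

def sample_one(rng):
    """Run the reverse chain: pure noise in, a data-like sample out."""
    x = rng.gauss(0.0, 1.0)  # start from N(0, 1)
    for t in range(T - 1, -1, -1):
        # Convert the score into a noise prediction, then take one
        # ancestral (denoising) step toward the data distribution.
        eps_hat = -math.sqrt(1.0 - alpha_bars[t]) * score(x, t)
        x = (x - betas[t] / math.sqrt(1.0 - alpha_bars[t]) * eps_hat) \
            / math.sqrt(1.0 - betas[t])
        if t > 0:  # the final step adds no fresh noise
            x += math.sqrt(betas[t]) * rng.gauss(0.0, 1.0)
    return x

rng = random.Random(0)
xs = [sample_one(rng) for _ in range(4000)]
mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / len(xs)
print(mean, math.sqrt(var))  # should land near MU = 2.0 and SIGMA = 0.5
```

The sketch also hints at the "trial and error" Durmus describes: the linear `betas` schedule and the step count `T` are exactly the kind of hand-tuned choices his theoretical work aims to put on firmer ground.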
“Right now, most of the models are designed by trial and error,” he explains. “We tweak parameters, adjust noise schedules, and hope it works. But we don’t always know why it works, or when it might fail.”
That lack of theoretical clarity has real consequences: unstable training, unpredictable failures, and challenges in extending these models to new domains like biology, economics, or climate science. Durmus’s project aims to bring order to the chaos.
To help visualize the kind of structure these models can achieve, the following figure shows a diffusion-based sampler designed to capture complex, multi-modal distributions.