Optimization over probability measures has become a cornerstone of modern machine learning and artificial intelligence. Traditional optimization techniques work with fixed datasets or parameters, but Anna Korba’s project goes further: it develops tools that operate directly on entire probability distributions, including distributions over very large or even infinite-dimensional spaces.
This framework has powerful implications for sampling tasks, that is, generating representative examples from a distribution or model. Such tasks are critical in fields such as:
- Bayesian machine learning, where sampling helps quantify uncertainty in model predictions
- Generative modeling, where it enables the creation of realistic new data, such as images or complex biological structures
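To make the sampling task concrete, the following is a minimal, illustrative sketch (not one of the project's own algorithms) of the unadjusted Langevin algorithm, a classical method that draws approximate samples from a target density using only the gradient of its log-density. The standard Gaussian target, step size, and particle count below are assumptions chosen for the example:

```python
import numpy as np

# Toy illustration: sample from an unnormalized target pi(x) ∝ exp(-U(x))
# with the unadjusted Langevin algorithm (ULA). Here U is the potential
# of a standard 2-D Gaussian; in Bayesian inference, U would be the
# negative log-posterior.

def grad_U(x):
    """Gradient of U(x) = ||x||^2 / 2 (standard Gaussian potential)."""
    return x

rng = np.random.default_rng(0)
step = 0.1                        # step size of the discretized diffusion
x = rng.normal(size=(1000, 2))    # 1000 particles in 2-D

for _ in range(500):
    noise = rng.normal(size=x.shape)
    x = x - step * grad_U(x) + np.sqrt(2 * step) * noise

# The particle cloud now approximates the standard Gaussian target
# (up to a small discretization bias controlled by the step size).
print(x.mean(axis=0))  # close to [0, 0]
print(x.std(axis=0))   # close to [1, 1]
```

Such particle schemes can be viewed as discretized gradient flows of the KL divergence in Wasserstein space, which is one way the optimization-over-measures perspective connects to sampling.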
However, existing sampling and optimization techniques often face significant limitations: they can be computationally expensive, hard to evaluate, and poorly suited to high-dimensional or complex data. Moreover, current models are mainly designed for finite-dimensional vector data and struggle with richer infinite-dimensional structures.
OptInfinite aims to overcome these challenges by creating a unified theoretical and practical framework. Leveraging tools from optimal transport and information geometry, the project will:
- develop more efficient and adaptable sampling algorithms
- design robust evaluation methods to assess the quality of generated samples
- deliver an open-source software toolkit to make these techniques widely accessible
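As an illustration of the evaluation problem mentioned above, one standard, generic way to compare generated samples against reference samples is the maximum mean discrepancy (MMD). This is a well-known kernel-based metric, not a method specific to the project; the Gaussian kernel and bandwidth below are illustrative choices:

```python
import numpy as np

# Illustrative sketch: squared maximum mean discrepancy (MMD^2) with a
# Gaussian kernel, a generic metric for comparing two sample sets.

def gaussian_kernel(x, y, bandwidth=1.0):
    """Pairwise kernel matrix k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 h^2))."""
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd_squared(x, y, bandwidth=1.0):
    """Biased (V-statistic) estimator of MMD^2 between samples x and y."""
    k_xx = gaussian_kernel(x, x, bandwidth).mean()
    k_yy = gaussian_kernel(y, y, bandwidth).mean()
    k_xy = gaussian_kernel(x, y, bandwidth).mean()
    return k_xx + k_yy - 2 * k_xy

rng = np.random.default_rng(1)
# Two sample sets from the same distribution give a small MMD^2;
# shifting one set makes the discrepancy much larger.
same = mmd_squared(rng.normal(size=(500, 2)), rng.normal(size=(500, 2)))
diff = mmd_squared(rng.normal(size=(500, 2)), rng.normal(size=(500, 2)) + 2.0)
print(same, diff)  # the shifted samples give a much larger MMD^2
```

Metrics of this kind are one building block for evaluating generated samples, though assessing quality in high-dimensional or structured settings remains an open challenge of the sort the project targets.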
The methods developed through OptInfinite will be tested on real-world applications, including large-scale AI models, Bayesian inference, biological systems modeling, and beyond.