Every measurement, whether in physics, statistics, or machine learning, comes with a cost. From Heisenberg’s uncertainty principle to the limits of data prediction, Professor Xiao-Li Meng reminds us that knowledge itself is bounded by trade-offs. Precision and uncertainty are not opposites; they are partners in the same dance. In science, as in life, there is no free lunch.
Hi! PARIS Summer School 2025
Speaker Insight – Aymeric Dieuleveut, École polytechnique

As machine learning systems become embedded in critical decisions, from finance to infrastructure, the need for trustworthy, interpretable predictions has never been greater. Aymeric Dieuleveut, Professor of Statistics and Machine Learning at École polytechnique and scientific co-director of the Hi! PARIS Center, believes the key lies not in the models themselves, but in how we communicate their uncertainty. At this year’s Hi! PARIS Summer School, Dieuleveut introduced participants to conformal prediction, a statistical framework designed to make machine learning outputs more transparent, reliable, and ready for real-world deployment.

Key Takeaways

- Conformal prediction provides a flexible way to quantify uncertainty around machine learning predictions, offering guarantees that are easy to interpret.
- Rather than replacing existing models, conformal methods build on top of any trained predictor, including black-box models, probabilistic forecasts, or quantile regressors.
- Several trade-offs structure the design of conformal methods, especially between computational efficiency and statistical robustness.
- The approach has already been deployed in real-world applications, such as energy price forecasting at EDF.
- Conformal prediction is part of a broader ecosystem of methods, alongside privacy, decentralization, and robustness, that aim to build public trust in AI systems.

Aymeric Dieuleveut at the Hi! PARIS Summer School 2025

Moving beyond the single prediction

At its core, conformal prediction challenges a basic assumption in machine learning: that a model should produce a single best guess. Instead, it offers prediction sets (ranges or intervals) that come with a statistical guarantee that the true value lies within them. For Dieuleveut, this marks a shift not only in method, but in mindset.

“When we make predictions with black-box models, we often don’t know how reliable the outputs are,” he explained. “Conformal prediction helps us go beyond that, to actually measure the uncertainty in a principled way.”

Exploring the trade-offs

During his tutorial, Dieuleveut walked participants through the two key trade-offs involved in designing conformal prediction methods. The first is between computational cost and statistical efficiency. Some variants, such as split conformal prediction, are simple and fast; others offer stronger guarantees but require more intensive computation.

The second trade-off concerns the strength of the guarantee. Most conformal methods ensure what’s known as marginal validity, meaning the coverage guarantee holds on average over all inputs. Newer methods are moving toward conditional validity, where the coverage depends on specific conditions or inputs. “This is a subtle but important evolution,” Dieuleveut noted. “It brings us closer to more personalized, context-aware uncertainty estimates.”

Aymeric Dieuleveut at the Hi! PARIS Summer School 2025

From energy markets to model deployment

Conformal prediction isn’t just a theoretical construct: it’s already in use. One example Dieuleveut highlighted comes from the PhD work of Margaux Zaffran, conducted with EDF. By applying conformal methods to electricity price forecasts, her work helped quantify uncertainty in a domain where stakes are high and volatility is common. As Dieuleveut emphasized, this is one of the most compelling strengths of conformal prediction: it is model-agnostic and ready to plug into existing systems.
“People don’t want to retrain their entire model pipeline just to estimate uncertainty,” he said. “Conformal prediction allows them to add this layer on top.”

Part of a broader trust ecosystem

In a broader sense, conformal prediction is one piece of a larger puzzle. Alongside techniques focused on privacy, robustness, and decentralization, it contributes to building trust in AI systems. Each of these methods tackles a different dimension: privacy protects data, robustness handles adversaries, decentralization enables learning across networks. But all share a common goal: making machine learning models more reliable and aligned with real-world constraints.

Dieuleveut also noted that, methodologically, these areas are deeply connected. Many draw from shared optimization principles and can be applied using overlapping toolkits.

Compatible, not competitive

One misconception Dieuleveut addressed during his session is the idea that conformal prediction is at odds with Bayesian or probabilistic approaches. In fact, the opposite is true. Conformal methods are often complementary, enhancing existing models rather than replacing them.

“You can apply conformal prediction to virtually any trained model,” he explained. “That’s why it’s so powerful: it doesn’t throw away years of progress in other domains. It builds on them.”

In a landscape where model reuse is critical and deployment pipelines are complex, that kind of adaptability isn’t just convenient; it’s essential.
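To make the mechanics concrete, here is a minimal sketch of split conformal prediction for regression, the simple-and-fast variant mentioned above. The synthetic data, the random-forest base model, and the 90% coverage level are illustrative assumptions, not details from the tutorial.

```python
# Minimal sketch of split conformal prediction for regression (illustrative only).
# Assumptions: synthetic data, a random-forest base model, and a 90% target coverage.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(2000)

# Split the data: one part trains the model, a held-out part calibrates the intervals.
X_train, X_calib, y_train, y_calib = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Nonconformity scores on the calibration set: absolute residuals of the trained model.
scores = np.abs(y_calib - model.predict(X_calib))

# Quantile of the scores that yields (approximately) 90% marginal coverage.
alpha = 0.1
n = len(scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n
q_hat = np.quantile(scores, q_level, method="higher")

# Prediction interval for a new point: point prediction plus or minus the calibrated margin.
x_new = np.array([[1.0]])
pred = model.predict(x_new)[0]
print(f"90% prediction interval: [{pred - q_hat:.2f}, {pred + q_hat:.2f}]")
```

The calibrated margin comes entirely from held-out residuals, so the marginal coverage guarantee holds whatever the underlying model is; this is exactly the model-agnostic, plug-on-top behavior described above.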
Hi! PARIS has launched the 2026 Internal Fellowship call to support long-term research and teaching in AI and Data Analytics for science, business and society. The program provides funding for internal researchers from the Hi! PARIS Cluster 2030 and offers an annual budget with flexibility in allocation between salary, research activities, scientific event organization, and PhD student funding.

Eligibility

The call is open only to professors and researchers from:

- Institut Polytechnique de Paris schools: École Polytechnique, ENSTA, École des Ponts ParisTech (ENPC), ENSAE Paris, Télécom Paris, Télécom SudParis
- HEC Paris
- Inria (Centre Inria de l’IP Paris)
- CNRS-affiliated teams within the Hi! PARIS Cluster 2030

External candidacies are not eligible.

Deadline

January 8, 2026 – 1:00 PM (Paris time)

Researchers are encouraged to apply and contribute to advancing interdisciplinary AI & Data Analytics research with societal impact.

See details & application materials
At this year’s Hi! PARIS Summer School, Solenne Gaucher (École polytechnique) shed light on the growing challenge of fairness in AI. As algorithms trained on biased data shape decisions at scale, she reminded us that fairness is neither only a mathematical problem nor only an ethical one. Instead, it sits at the intersection of both, and demands attention from scientists, policymakers, and society alike.
From October 19 to 25, Hi! PARIS researchers will be in Honolulu, Hawaii, for the International Conference on Computer Vision (ICCV 2025), one of the most important gatherings worldwide in the field. Ten papers from Hi! PARIS-affiliated teams have been accepted this year, a recognition of the quality of our work across partner institutions.
We are proud to announce that Anna Korba, Assistant Professor in Statistics at CREST-GENES, Professor at ENSAE Paris, and Hi! PARIS Affiliate, has been awarded a European Research Council (ERC) Starting Grant for her project OptInfinite.
Optimal Transport for Machine Learning is in the spotlight of the Hi! PARIS Reading groups in October-December 2025, a scientific networking initiative gathering affiliates and corporate donors around important topics of the moment!
At this year’s Hi! PARIS Summer School, Anna Korba (ENSAE Paris) took a fresh look at Langevin diffusions, an old idea from physics that’s quietly becoming central to generative modeling. As machine learning and mathematics increasingly overlap, she invites us to pay closer attention to what’s happening under the hood of today’s most talked-about models.
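As a rough illustration of the dynamics in question, the sketch below runs the unadjusted Langevin algorithm on a simple two-dimensional Gaussian target; the target, step size, and iteration count are assumptions chosen for the example, not material from the talk.

```python
# Minimal sketch of the unadjusted Langevin algorithm (ULA), illustrative only.
# Target: a standard 2D Gaussian, so the gradient of the log-density is simply -x.
import numpy as np

rng = np.random.default_rng(0)

def grad_log_density(x):
    # For a standard Gaussian target, grad log p(x) = -x.
    return -x

step = 0.05          # step size (an assumption for the example)
n_steps = 5000
x = np.zeros(2)      # starting point
samples = np.empty((n_steps, 2))

for t in range(n_steps):
    noise = rng.standard_normal(2)
    # Langevin update: gradient step on the log-density plus injected Gaussian noise.
    x = x + step * grad_log_density(x) + np.sqrt(2 * step) * noise
    samples[t] = x

print("empirical mean:", samples[1000:].mean(axis=0))
print("empirical covariance:\n", np.cov(samples[1000:].T))
```

Score-based generative models follow the same recipe, but replace the analytical gradient of the log-density with a score learned from data by a neural network, which is the connection to generative modeling the teaser alludes to.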
“AI is no longer a support tool in biology, it’s becoming a scientific partner.”
At the Hi! PARIS Summer School 2025, Jean-Philippe Vert (Bioptimus) explored how AI is transforming biomedical research. From protein folding breakthroughs like AlphaFold to in silico simulations of disease, Vert made the case for a new era where biology’s complexity meets AI’s learning power. The next frontier? Models that understand life across molecules, cells, and entire organisms.
Ioannis Stefanou, new Hi! PARIS Chair-holder and Professor at ENSTA, is combining AI, mechanics, and physics to advance the energy transition. His work explores how technologies like geothermal energy and hydrogen storage can be modeled through physics-informed AI, creating smarter, more sustainable systems for the future.
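As an illustration of the general idea (not of Stefanou’s own models), the sketch below builds a physics-informed loss for a toy one-dimensional diffusion equation: a data-misfit term plus a penalty on the discretized PDE residual. The equation, grid, and observation points are assumptions made for the example.

```python
# Minimal sketch of a physics-informed loss, illustrative only.
# Toy problem (an assumption, not from Stefanou's work): steady-state 1D diffusion
#   u''(x) = f(x) on [0, 1], with a few noisy observations of u.
import numpy as np

def physics_informed_loss(u, x, f, obs_idx, u_obs, weight=1.0):
    """Data-misfit term plus a penalty on the PDE residual, computed by finite differences."""
    dx = x[1] - x[0]
    # Second derivative of the candidate solution on interior grid points.
    d2u = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    pde_residual = d2u - f(x[1:-1])
    data_misfit = u[obs_idx] - u_obs
    return np.mean(data_misfit**2) + weight * np.mean(pde_residual**2)

# Grid, source term, and synthetic observations (all assumptions for the example).
x = np.linspace(0.0, 1.0, 101)
f = lambda x: -np.pi**2 * np.sin(np.pi * x)   # chosen so the true solution is u(x) = sin(pi x)
u_true = np.sin(np.pi * x)
obs_idx = np.array([10, 50, 90])
u_obs = u_true[obs_idx] + 0.01 * np.random.default_rng(0).standard_normal(3)

# The true solution has both a small data misfit and a small physics residual;
# a physically inconsistent candidate is penalised even if it roughly fits the observations.
print("loss at true solution:", physics_informed_loss(u_true, x, f, obs_idx, u_obs))
print("loss at zero guess:   ", physics_informed_loss(np.zeros_like(x), x, f, obs_idx, u_obs))
```

In a full physics-informed model, the candidate solution would be produced by a neural network and this composite loss would be minimized with respect to its parameters.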