What happens when artificial intelligence pushes the boundaries of image creation from flat, 2D visuals into fully controllable 3D scenes? In their work, Maks Ovsjanikov (Professor at École polytechnique) and Léopold Maillard (PhD student at École polytechnique) introduce LACONIC, a new 3D layout adapter that pushes generative image models into real 3D. Built on top of existing diffusion models, it keeps the same scene consistent across different camera angles, lets you move the camera freely, and even edit specific objects, all without heavy retraining. In practical terms, it closes the gap between today’s 2D image generation and true 3D control. The result: faster, cheaper, more editable visuals, and a step toward fully controllable 3D content for design, gaming, and visual production.

Key takeaways

- Most text-to-image models are stuck in 2D: they can’t keep scenes consistent across viewpoints or let you edit objects like real things.
- LACONIC brings explicit 3D layouts, so you can move the camera, tweak individual objects, and keep the scene coherent, using a lightweight adapter instead of retraining whole models.
- This unlocks consistent multi-view generation and precise per-object control at scale.

Beyond 2D: the heart of LACONIC

For the LACONIC team, the old ways of generating images felt limiting. Existing systems lacked any real understanding of how objects live in a 3D world. They could draw a bedroom, but couldn’t let you “walk” around it, shift the furniture, or change the style and colors from one angle to the next. LACONIC solves this by taking in explicit layout information and converting it into images that remain realistic no matter the direction or perspective chosen. This isn’t just a technical leap; it’s a step towards truly interactive digital creativity, and towards making generative AI useful in domains ranging from cinematic production to architectural design and virtual reality.
Conflicting goals and new power

LACONIC’s innovation rests in flexible scene editing. Where traditional diffusion approaches might struggle to adapt a scene to different styles, epochs, or user requests, LACONIC embraces per-object and semantic edits: you can shift furniture, change the size of items, swap colors, and adjust the overall look of a room just by changing the underlying 3D layout or object labels. This flexibility means image generation can become iterative and collaborative, with stronger control and fewer unwanted surprises for end users.

Why lightweight matters: efficiency & collaboration

One of LACONIC’s hallmark qualities is that it fine-tunes only a small adapter, not the whole model. This keeps the method efficient and adaptable, avoiding the heavy computational costs that so often block research deployment and real-world adoption. For the next wave of creators, this represents not just a technical upgrade but an invitation to push the boundaries of what AI-generated imagery can become.

A call for the next generation

Looking ahead, LACONIC points the way toward a new era in text-to-image synthesis: one where models understand space, structure, and interaction, and where users can guide, edit, and refine images with detailed realism. There are still challenges to solve, from generalization to ethical considerations, but for students, makers, and technologists, this work highlights a dynamic field filled with open questions and creative opportunities.
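The general idea behind adapter-style fine-tuning, freezing a large pretrained backbone and training only a small conditioning module, can be sketched in PyTorch. This is a generic illustration of the pattern, not LACONIC’s actual architecture: the tiny `base` network, the `LayoutAdapter` class, and all dimensions are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a large pretrained generative backbone.
base = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))

# Freeze every backbone parameter: only the adapter below will be trained.
for p in base.parameters():
    p.requires_grad = False

class LayoutAdapter(nn.Module):
    """Small trainable module injecting conditioning (e.g. an encoded
    3D layout) into the frozen backbone's features. Illustrative only."""

    def __init__(self, feat_dim=64, layout_dim=16):
        super().__init__()
        self.proj = nn.Linear(layout_dim, feat_dim)

    def forward(self, features, layout):
        # Residual conditioning: add the projected layout to the features.
        return features + self.proj(layout)

adapter = LayoutAdapter()

# Only the adapter's (few) parameters reach the optimizer, which is what
# keeps this style of fine-tuning cheap compared to full retraining.
trainable = [p for p in adapter.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```

The key point is the parameter count: the optimizer never sees the backbone’s weights, so training cost scales with the adapter alone.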
Hi! PARIS Summer School 2025
Speaker Insight – Aymeric Dieuleveut, École polytechnique

As machine learning systems become embedded in critical decisions, from finance to infrastructure, the need for trustworthy, interpretable predictions has never been greater. Aymeric Dieuleveut, Professor of Statistics and Machine Learning at École polytechnique and scientific co-director of the Hi! PARIS Center, believes the key lies not in the models themselves, but in how we communicate their uncertainty. At this year’s Hi! PARIS Summer School, Dieuleveut introduced participants to conformal prediction, a statistical framework designed to make machine learning outputs more transparent, reliable, and ready for real-world deployment.

Key Takeaways

- Conformal prediction provides a flexible way to quantify uncertainty around machine learning predictions, offering guarantees that are easy to interpret.
- Rather than replacing existing models, conformal methods build on top of any trained predictor, including black-box models, probabilistic forecasts, or quantile regressors.
- Several trade-offs structure the design of conformal methods, especially between computational efficiency and statistical robustness.
- This approach has already been deployed in real-world applications, such as energy price forecasting at EDF.
- Conformal prediction is part of a broader ecosystem of methods, alongside privacy, decentralization, and robustness, that aim to build public trust in AI systems.

Aymeric Dieuleveut at the Hi! PARIS Summer School 2025

Moving beyond the single prediction

At its core, conformal prediction challenges a basic assumption in machine learning: that a model should produce a single best guess. Instead, it offers prediction sets, ranges or intervals, with statistical guarantees that the true value lies within them. For Dieuleveut, this marks a shift not only in method, but in mindset.
“When we make predictions with black-box models, we often don’t know how reliable the outputs are,” he explained. “Conformal prediction helps us go beyond that, to actually measure the uncertainty in a principled way.”

Exploring the trade-offs

During his tutorial, Dieuleveut walked participants through the two key trade-offs involved in designing conformal prediction methods. The first is between computational cost and statistical efficiency. Some variants, such as split conformal prediction, are simple and fast. Others offer stronger guarantees but require more intensive computation.

The second trade-off concerns the strength of the guarantee. Most conformal methods ensure what’s known as marginal validity, meaning the coverage guarantee holds on average. But newer methods are moving toward conditional validity, where the coverage depends on specific conditions or inputs. “This is a subtle but important evolution,” Dieuleveut noted. “It brings us closer to more personalized, context-aware uncertainty estimates.”

From energy markets to model deployment

Conformal prediction isn’t just a theoretical construct; it’s already in use. One example Dieuleveut highlighted comes from the PhD work of Margaux Zaffran, conducted with EDF. By applying conformal methods to electricity price forecasts, her work helped quantify uncertainty in a domain where stakes are high and volatility is common. As Dieuleveut emphasized, this is one of the most compelling strengths of conformal prediction: it’s model-agnostic and ready to plug into existing systems. “People don’t want to retrain their entire model pipeline just to estimate uncertainty,” he said. “Conformal prediction allows them to add this layer on top.”

Part of a broader trust ecosystem

In a broader sense, conformal prediction is one piece of a larger puzzle.
Alongside techniques focused on privacy, robustness, and decentralization, it contributes to building trust in AI systems. Each of these methods tackles a different dimension: privacy protects data, robustness handles adversaries, decentralization enables learning across networks. But all share a common goal: making machine learning models more reliable and aligned with real-world constraints. Dieuleveut also noted that, methodologically, these areas are deeply connected. Many draw from shared optimization principles and can be applied using overlapping toolkits.

Compatible, not competitive

One misconception Dieuleveut addressed during his session is the idea that conformal prediction is at odds with Bayesian or probabilistic approaches. In fact, the opposite is true. Conformal methods are often complementary, enhancing existing models rather than replacing them. “You can apply conformal prediction to virtually any trained model,” he explained. “That’s why it’s so powerful: it doesn’t throw away years of progress in other domains. It builds on them.” In a landscape where model reuse is critical and deployment pipelines are complex, that kind of adaptability isn’t just convenient; it’s essential.
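To make the split conformal recipe concrete, here is a minimal sketch in Python with NumPy. The toy data, the least-squares point predictor, and the 90% coverage target are our own illustrative assumptions, not material from the talk; the method itself only needs a held-out calibration set, a nonconformity score, and a finite-sample-corrected quantile, which is what makes it wrap around any black-box predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative only): y = 2x + Gaussian noise.
x = rng.uniform(-1.0, 1.0, size=2000)
y = 2.0 * x + rng.normal(scale=0.3, size=2000)

# Split: a proper training set for the predictor, a calibration set for conformal.
x_train, y_train = x[:1000], y[:1000]
x_cal, y_cal = x[1000:], y[1000:]

# Any trained predictor can serve as the black box; here, a least-squares line.
slope, intercept = np.polyfit(x_train, y_train, deg=1)

def predict(t):
    return slope * t + intercept

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - predict(x_cal))

# Quantile of the scores with the finite-sample (n + 1) correction,
# targeting 90% marginal coverage (alpha = 0.1).
alpha = 0.1
n = len(scores)
rank = int(np.ceil((n + 1) * (1 - alpha)))  # index of the corrected quantile
q = np.sort(scores)[rank - 1]

# Prediction interval for a new input: point prediction plus/minus q.
x_new = 0.5
lower, upper = predict(x_new) - q, predict(x_new) + q
```

On exchangeable data, the interval `[lower, upper]` contains the true response with probability at least 90% on average, which is exactly the marginal validity guarantee discussed above; note that no part of the predictor had to be retrained to obtain it.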
The Hi!ckathon 2022 challenge opens on March 4.
Hi! PARIS has launched the 2026 Internal Fellowship call to support long-term research and teaching in AI and Data Analytics for science, business and society. The program provides funding for internal researchers from the Hi! PARIS Cluster 2030 and offers an annual budget with flexibility in allocation between salary, research activities, scientific event organization, and PhD student funding.

Eligibility

The call is open only to professors and researchers from:
- Institut Polytechnique de Paris schools: École Polytechnique, ENSTA, École des Ponts ParisTech (ENPC), ENSAE Paris, Télécom Paris, Télécom SudParis
- HEC Paris
- Inria (Centre Inria de l’IP Paris)
- CNRS-affiliated teams within the Hi! PARIS Cluster 2030

External candidacies are not eligible.

Deadline

January 8, 2026 – 1:00 PM (Paris time)

Researchers are encouraged to apply and contribute to advancing interdisciplinary AI & Data Analytics research with societal impact. See details & application materials.
At this year’s Hi! PARIS Summer School, Solenne Gaucher (École polytechnique) shed light on the growing challenge of fairness in AI. As algorithms trained on biased data shape decisions at scale, she reminded us that fairness is neither only a mathematical problem nor only an ethical one. Instead, it sits at the intersection of both, and demands attention from scientists, policymakers, and society alike.
The Hi! PARIS AI Seminar Cycle is a monthly series showcasing leading research in Artificial Intelligence and Data Science. Held on the first Wednesday of each month, it brings together top scholars, students, and partners to explore AI’s scientific, business, and societal impact across key themes such as foundation models, trustworthy AI, and AI for science and engineering.
From October 19 to 25, Hi! PARIS researchers will be in Honolulu, Hawaii, for the International Conference on Computer Vision (ICCV 2025), one of the most important gatherings worldwide in the field. Ten papers from Hi! PARIS-affiliated teams have been accepted this year, a recognition of the quality of our work across partner institutions.
The Career Fair event organized by Hi! PARIS Center offers our students the opportunity to identify potential future internships and job opportunities in AI and Data Science, receive career advice, and engage in discussions with the participating companies and startups.
We are proud to announce that Anna Korba, Assistant Professor in Statistics at CREST-GENES, Professor at ENSAE Paris, and Hi! PARIS Affiliate, has been awarded a European Research Council (ERC) Starting Grant for her project OptInfinite.
Optimal Transport for Machine Learning is in the spotlight of the Hi! PARIS Reading groups in October-December 2025, a scientific networking action gathering affiliates and corporate donors around important topics of the moment!