
An Afternoon on Numerical Optimization and Machine Learning – Join Us at École Polytechnique!

We have the privilege of hosting a special event showcasing the latest in numerical optimization: “An Afternoon on Numerical Optimization,” set for Friday 5 April at École Polytechnique.
Friday 5 April 2024, 1.30pm – 6.00pm
Ecole polytechnique, Palaiseau (Amphitheater Becquerel)
(on site)

On the agenda:

  • 13:30: Andrea Simonetto, “Flexible Optimization for Cyber-Physical and Human Systems.”
  • 14:10: Antonin Chambolle, “On the jump set of total variation regularized images.”
  • 14:45: Laurent Lessard, “Automated Analysis and Synthesis of Distributed Optimization Algorithms.”
  • 15:20-16:00: Networking opportunity and coffee break.
  • 16:00-18:00: Thesis defense, Baptiste Goujaud, “Constructive approaches to worst-case complexity analyses of gradient methods for convex optimization: contributions, new insights, and novel results.”
Speakers & Schedule:
  • 13:30: Andrea Simonetto, ENSTA, discusses “Flexible Optimization for Cyber-Physical and Human Systems.” Learn more about Andrea here.

Bio: Andrea Simonetto obtained his Master’s Degree in Space Engineering from both Politecnico di Milano and Politecnico di Torino with honors in April 2008, after four months spent at the Robotics Institute, Carnegie Mellon University, USA. He also obtained the Alta Scuola Politecnica diploma in June 2008.

From September 2008 to November 2012, Andrea was with the Delft Center for Systems and Control at Delft University of Technology, working towards his PhD degree, which he successfully defended on the 5th of November 2012. During this period, Andrea spent three months as a visitor at KTH, Stockholm, Sweden.

He then spent four years as a post-doctoral researcher: from January 2013 to February 2016, he was with the Circuits and Systems Group at Delft University of Technology, collaborating with UPenn, Philadelphia, USA; and from March 2016 to January 2017, he was with the ICTEAM Institute at UCLouvain.

Subsequently, from February 2017 to August 2021, Andrea was a research staff member with the optimization and control group, then the AI and Quantum group at IBM Research Ireland.

Finally, in September 2021, Andrea became a research professor at ENSTA Paris; he defended his HDR on the 9th of September 2022 at the Institut Polytechnique de Paris.

Andrea was awarded the DISC Ph.D. Thesis Award 2012 and the Politecnico gold medal as the best graduate student from Politecnico di Milano in 2008. He received an honorable mention at the 33rd International Physics Olympiad (IPhO) in Bali, in 2002, and took part in the national Italian Mathematics Olympiad, in 2001.

Title: Flexible Optimization for Cyber-Physical and Human Systems

Abstract: Can we allow humans to pick among different, yet reasonably similar, decisions? Are we able to construct optimization problems whose outcomes are sets of feasible, close-to-optimal decisions for human users to pick from, instead of a single, hardly explainable, do-as-I-say “optimal” directive?

In this paper, we explore two complementary ways to render optimization problems stemming from cyber-physical applications flexible. In doing so, the optimization outcome becomes a trade-off between the engineering optimum and the flexibility for users to decide to do something slightly different. The first method is based on robust optimization and convex reformulations. The second method is stochastic and inspired by stochastic optimization with decision-dependent distributions.
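One simple way to make the idea of a “set of close-to-optimal decisions” concrete (our notation, not necessarily the formulation used in the talk) is the $\varepsilon$-suboptimal set of a problem $\min_{x \in X} f(x)$:

\[
X_\varepsilon = \{\, x \in X : f(x) \le f^\star + \varepsilon \,\}, \qquad f^\star = \min_{x \in X} f(x),
\]

so that every decision in $X_\varepsilon$ is feasible and at most $\varepsilon$ away from optimal, leaving the final choice among these decisions to the human user.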

  • 14:10: Antonin Chambolle, Université Paris Dauphine, a researcher in analysis and numerical optimization, presents “On the jump set of total variation regularized images.” For more on Antonin, visit his page.

Bio: Antonin Chambolle is a researcher in analysis interested in numerical analysis, geometric analysis, and the calculus of variations. He studies problems with free discontinuities, free boundaries, or interfaces coming from applications such as image processing and reconstruction, physics, and materials science. He is also interested in numerical optimization for non-smooth problems related to these topics.

After many years with CMAP, Ecole Polytechnique, Antonin is now a member of CEREMADE, the applied mathematics department at Dauphine / PSL University.

His recent work focuses on some limiting energies for liquid crystals, the study of “GSBD” functions and the Griffith problem in fracture mechanics, some interfacial evolution problems (anisotropic partitions, Wasserstein flow, crystalline curvature flow), the discretization of the total variation and some techniques for differentiation in convex optimization.

Title: On the jump set of total variation regularized images

Abstract: With Michał Łasica (Warsaw), we introduce a new, simpler method for analyzing the jump set of minimizers of the total variation with a quadratic penalization (the celebrated “Rudin-Osher-Fatemi” denoising problem). We give a simple proof that no spurious jumps are created, which applies to many more settings than previous approaches, in particular to multispectral or color signals, depending on the definition of the corresponding total variation.
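For reference, the Rudin-Osher-Fatemi (ROF) model mentioned above denoises an image by combining the total variation with a quadratic fidelity term; in a standard formulation (our notation), given a noisy image $f$ on a domain $\Omega$, one solves

\[
\min_{u} \; \frac{1}{2} \int_\Omega (u - f)^2 \, dx \;+\; \lambda \, |Du|(\Omega),
\]

where $|Du|(\Omega)$ denotes the total variation of $u$ and $\lambda > 0$ sets the strength of the regularization. The jump set studied in the talk is the set of discontinuities of the minimizer $u$; “no spurious jumps” means, roughly, that this set is not larger than the discontinuity set of the datum $f$.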

  • 14:45: Laurent Lessard, Northeastern University, presenting on “Automated Analysis and Synthesis of Distributed Optimization Algorithms.” Discover more about Laurent here.

Bio: Laurent Lessard is an Associate Professor of Mechanical and Industrial Engineering at Northeastern University, with courtesy appointments in ECE and CS. Before joining Northeastern, Laurent was a Charles Ringrose Assistant Professor of Electrical and Computer Engineering at the University of Wisconsin–Madison and a Faculty Member at the Wisconsin Institute for Discovery. Laurent received a B.A.Sc. in Engineering Science from the University of Toronto and received an M.S. and Ph.D. in Aeronautics and Astronautics at Stanford University, advised by Sanjay Lall and co-advised by Matthew West. After completing his doctoral work, Laurent was an LCCC Postdoc at Lund University, Sweden, where he worked with Anders Rantzer. Subsequently, Laurent was a postdoctoral researcher at the University of California, Berkeley, where he worked in the Berkeley Center for Control and Identification with Andrew Packard and also collaborated with Ben Recht and others in the AMPLab. While at Berkeley, he was also partially supported by Kameshwar Poolla. Laurent is a recipient of the Hugo Schuck Best Paper Award and the NSF CAREER award. His research interests include decentralized control, robust control, optimization, and machine learning.

Title: Automated analysis and synthesis of distributed optimization algorithms

Abstract: In distributed optimization, a network of agents, such as computing nodes, robots, or mobile sensors, work collaboratively to optimize a global objective. Each agent knows a local function and must find a minimizer of the sum of all agents’ local functions by performing a combination of local gradient evaluations and communicating information with neighboring agents based on some directed communication graph. Several different algorithms have been proposed that achieve linear convergence to the global optimum when the local functions are strongly convex. We provide a unified analysis that yields the worst-case linear convergence rate as a function of the condition number of the local functions, the spectral gap of the graph, and the parameters of the algorithm. The framework requires solving a small semidefinite program whose size is fixed; it does not depend on the number of local functions or the dimension of their domain. We also present a structural result regarding distributed optimization algorithms: every distributed optimization algorithm can be factored into a centralized optimization method and a second-order consensus estimator, effectively separating the “optimization” and “consensus” tasks. Conversely, any optimization method that converges in the centralized setting can be combined with any second-order consensus estimator to form a distributed optimization algorithm that converges in the distributed setting. Finally, we propose a new algorithm, which we call SVL, that is easily implementable and achieves a faster worst-case convergence rate than all other known algorithms.
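To fix notation, the distributed setup described in this abstract can be summarized as follows (our notation): $N$ agents, with agent $i$ holding a private local function $f_i$, cooperate to solve

\[
\min_{x \in \mathbb{R}^d} \; \frac{1}{N} \sum_{i=1}^{N} f_i(x),
\]

where each agent can only evaluate its own gradient $\nabla f_i$ and exchange iterates with its neighbors in the communication graph; the analysis mentioned above bounds the worst-case linear rate of such schemes by solving a small, fixed-size semidefinite program.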

  • 15:20-16:00: Networking opportunity. Join us for a coffee break, a great chance for networking and discussions.
  • 16:00-18:00: Special thesis defense:

Baptiste Goujaud from CMAP, Ecole Polytechnique, defends his thesis on the complexity analyses of gradient methods for convex optimization. Explore Baptiste’s work here.

Bio: Baptiste Goujaud is a Ph.D. student in Optimization at Ecole Polytechnique under the supervision of Aymeric Dieuleveut and Adrien Taylor.

Before that, he completed an M.Sc. in Machine Learning and Computer Vision at the ENS Paris-Saclay.

Title: Constructive approaches to worst-case complexity analyses of gradient methods for convex optimization: contributions, new insights, and novel results.

Abstract: In the current era marked by an unprecedented surge in available data and computational power, the field of machine learning, and more specifically deep learning, has witnessed an extraordinary evolution. Machine learning algorithms heavily rely on optimization techniques to tune their parameters and enhance predictive accuracy. Amidst the myriad of optimization approaches, first-order methods have emerged as stalwarts, demonstrating a remarkable balance between efficacy and computational efficiency.

Crucially, the development of a strong optimization theory is pivotal in unraveling the full potential of first-order optimization. Theoretical underpinnings not only deepen our understanding of optimization landscapes but also pave the way for the design of novel algorithms. Momentum-based algorithms, for instance, have proven their mettle by significantly accelerating convergence, and the conceptual foundation provided by optimization theory enabled the formulation of momentum, turning theoretical insights into a powerful and widely adopted practical tool.

The role of this thesis is to pursue and accelerate the effort to develop a strong theoretical foundation for first-order optimization.

We proved various results, exploiting the general structure of the certificate proofs:

  • (i) We used the link between quadratic optimization and polynomial theory to explain empirically observed phenomena.
  • (ii) We implemented a Python package to support the performance estimation framework (a brief illustrative sketch follows this list).
  • (iii) We wrote a tutorial explaining how to derive natural proofs in optimization based on this framework.
  • (iv) We applied this methodology, with the help of our Python package, to derive a complete first-order optimization theory on a very large class of functions.
  • (v) We complemented the theoretical performance estimation framework to disprove the convergence of a specific family of methods, and applied it to the famous Heavy-ball method to provably rule out acceleration over the class of smooth and strongly convex functions.
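To give a flavor of item (ii), the sketch below uses PEPit, the open-source Python package for performance estimation problems (assumed here to be the package referred to; the example and parameter values are ours), to compute a worst-case guarantee for plain gradient descent over smooth strongly convex functions:

```python
# Minimal performance-estimation sketch with PEPit (illustrative parameters).
from PEPit import PEP
from PEPit.functions import SmoothStronglyConvexFunction

L, mu, gamma, n = 1.0, 0.1, 1.0, 5   # smoothness, strong convexity, step size, iterations

problem = PEP()

# Declare the function class and the objects the worst case is taken over.
func = problem.declare_function(SmoothStronglyConvexFunction, mu=mu, L=L)
xs = func.stationary_point()          # the (unknown) minimizer
x0 = problem.set_initial_point()      # the (unknown) starting point
problem.set_initial_condition((x0 - xs) ** 2 <= 1)

# Run n steps of gradient descent symbolically.
x = x0
for _ in range(n):
    x = x - gamma * func.gradient(x)

# Worst-case value of f(x_n) - f(x*) over the whole class, under the initial condition.
problem.set_performance_metric(func(x) - func(xs))
worst_case = problem.solve(verbose=0)
print(worst_case)
```

Behind the scenes, such a description is turned into a small semidefinite program whose solution is the worst-case value, exactly the kind of computation the certificate-based analyses in the thesis build on.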

 

We look forward to welcoming you to an afternoon filled with insightful talks and stimulating discussions.