We are delighted to present the project of Arnak Dalalyan, Hi! PARIS Fellow 2021.
Arnak Dalalyan is a professor at ENSAE Paris and director of CREST (Center for Research in Economics and Statistics, a joint research unit of CNRS, ENSAE Paris – Institut Polytechnique de Paris, École polytechnique – Institut Polytechnique de Paris, and GENES). His research revolves around statistical methods for machine learning, which he will further develop in his Hi! PARIS project entitled “Statistical Analysis of Generative Models: Sampling Guarantees and Robustness (SAGMOS)”.
Toward a better understanding of AI algorithms
How can we better understand the machine learning algorithms that are omnipresent in today’s artificial intelligence technologies? As a professor at ENSAE Paris and director of the Center for Research in Economics and Statistics (CREST, a joint research unit of CNRS, ENSAE Paris – Institut Polytechnique de Paris, École polytechnique – Institut Polytechnique de Paris, and GENES), Arnak Dalalyan is tackling this question with the help of mathematics. “When I was a student, I was attracted by pure mathematics, such as algebraic geometry”, he remembers, “but I also wanted to deal with society-related topics”. That is why he turned to statistics, the discipline of methods for working with numbers and data, the latter being key ingredients of artificial intelligence today.
“My definition of artificial intelligence is rather universal. It means building machines able to take decisions. There are several ways to do so”, Arnak Dalalyan explains. “Today, we no longer feed the AI with symbolic rules but with data, and the machine itself deduces what the rules are”. This data-centric AI has become predominant. We have all heard terms like “machine learning” and “neural network”, and the approach has proven very successful at concrete tasks such as playing Go or analysing images.
AI algorithms are becoming more and more complex, and neural networks require an increasing number of tuning parameters to handle new applications. These algorithms are tested before being widely deployed, but much research is still needed to understand their properties and guarantee that they will work in specific cases. As a researcher, Arnak Dalalyan cares about elegance and simplicity. “Complexity is not always necessary. A simpler algorithm is more robust and more versatile than a complex one.” A simpler algorithm also needs less computing time to train, hence reducing power consumption, which is far from negligible.
In order to better understand machine learning algorithms, Arnak Dalalyan’s project financed by Hi! PARIS will study their properties from several perspectives. One of them is statistical complexity, which quantifies the amount of data needed to train an algorithm so that its predictions reach a given level of precision, for instance fewer than 5% errors.
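To make this notion concrete, here is a minimal, hypothetical sketch (not taken from the project): a one-dimensional classifier whose labels follow a fixed threshold rule. Watching the test error shrink as the number of training examples grows shows, in miniature, the relationship between data volume and precision that statistical complexity studies.

```python
import numpy as np

rng = np.random.default_rng(0)

def error_after_training(n_train, n_test=100_000):
    """Train a threshold classifier on n_train points and return its test error."""
    # True rule (unknown to the learner): label is 1 iff x > 0.5.
    x = rng.uniform(0, 1, n_train)
    y = x > 0.5
    if y.all() or not y.any():
        return 0.5  # degenerate sample with a single class: no better than chance
    # Learned rule: place the threshold halfway between the two classes.
    t_hat = (x[~y].max() + x[y].min()) / 2
    x_test = rng.uniform(0, 1, n_test)
    return np.mean((x_test > t_hat) != (x_test > 0.5))

for n in (10, 100, 1000):
    print(f"n_train = {n:4d}  test error ~ {error_after_training(n):.4f}")
```

In this toy setting the error shrinks roughly like 1/n; statistical complexity asks the converse question, namely how many examples are needed to guarantee a prescribed error level such as 5%.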
Knowing this may facilitate the training of machine learning algorithms, which need to be fed a bewildering number of examples before providing satisfactory outputs. Movie recommendation algorithms, for instance, need to gather a huge amount of information about each user (e.g. based on their browsing history) in order to determine the user’s characteristics. The user profile can then be seen as a point in a mathematical space with a large number of dimensions, each corresponding to one characteristic. This large number of dimensions makes the problem harder to grasp. Moreover, the neural networks implementing the algorithm are described by model parameters that are themselves high-dimensional. This double “curse of dimensionality”, as mathematicians call it, can lead to an exponential increase in the problem’s complexity. “We are working with mathematical tools that help us find subspaces within this large-dimensional space that can be handled more easily while retaining the essential information”, explains Arnak Dalalyan.
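One classical example of such subspace-finding is principal component analysis; the sketch below illustrates the general idea only, not the specific tools used in the project. Synthetic high-dimensional “user profiles” are generated so that they secretly vary along just five latent directions, and a singular value decomposition recovers a five-dimensional subspace carrying essentially all of the information.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "user profiles": 1000 users, 200 observed characteristics,
# but the profiles really vary along only 5 latent directions.
latent = rng.normal(size=(1000, 5))
mixing = rng.normal(size=(5, 200))
profiles = latent @ mixing + 0.01 * rng.normal(size=(1000, 200))

# PCA via SVD: find the low-dimensional subspace capturing most of the variance.
centered = profiles - profiles.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)
print(f"fraction of variance captured by 5 components: {explained[4]:.4f}")
```

Projecting the 200-dimensional profiles onto the first five singular directions reduces the problem’s dimension by a factor of 40 while losing almost nothing, which is the kind of simplification the quote above alludes to.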
The theoretical work of his team is also supported by numerical simulations, but it is not aimed at developing new algorithms for new applications. Experience shows, however, that a better theoretical understanding of an algorithm often leads to small but significant improvements. “Many research teams in this field are progressing very quickly worldwide. Despite the competition, there are also synergies, and the different results complement each other. In any case, there are still many discoveries to be made.”