
No Free Lunch: What statistics can teach us about the limits of knowledge

By Xiao-Li Meng (Harvard University), Hi! PARIS International Invited Professor

Mathematics and physics have long shared a quiet truth: every gain in precision comes at a cost. Professor Xiao-Li Meng reminded us, during the first session of the Hi! PARIS AI Seminar Cycle, that whether you are a statistician, a physicist, or a policymaker, there is, quite literally, no free lunch.

The mathematics of limits

Imagine trying to measure both the position and speed of a moving particle. Physics tells us that the closer we get to one, the blurrier the other becomes. This is Heisenberg’s uncertainty principle, a cornerstone of quantum mechanics. Now shift from atoms to algorithms. In statistics, a similar principle governs our attempts to estimate the truth. We can measure something, a mean, a correlation, a trend, but we can never do so with perfect certainty. Every estimate carries an error, and even that error can only be known approximately.
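
To make the parallel concrete, here is the textbook form of the uncertainty relation alongside its statistical cousin, the standard error of a sample mean (a standard-formula sketch, not notation from the talk itself): both say that a measure of spread can never be driven all the way to zero with finite resources.

```latex
% Heisenberg's uncertainty principle: the spreads of position and momentum
% cannot both be made arbitrarily small.
\[
  \sigma_x \,\sigma_p \;\ge\; \frac{\hbar}{2}
\]
% Statistical analogue: with n observations of standard deviation sigma,
% the uncertainty of the sample mean shrinks only like 1/sqrt(n),
% so finite data always leaves a finite margin of error.
\[
  \operatorname{se}\bigl(\bar{X}_n\bigr) \;=\; \frac{\sigma}{\sqrt{n}}
\]
```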

The professor described it as “the greatest statistical magic”: the ability to not only estimate something but also quantify how uncertain that estimate is, all from the same data. It sounds miraculous, until you realize it relies on trade-offs hidden deep inside the mathematics.
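
As a minimal illustration of that "magic" (a generic Python sketch with simulated data, not code shown in the seminar), the snippet below estimates a mean and then, from the very same sample, quantifies how uncertain that estimate is, once with the classical standard-error formula and once with a simple bootstrap.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=2.0, size=200)  # one observed sample

# Point estimate of the mean.
mean_hat = data.mean()

# Classical estimate of its own uncertainty, computed from the same data.
se_classical = data.std(ddof=1) / np.sqrt(len(data))

# Bootstrap estimate of the same uncertainty: resample the data we already have.
boot_means = [rng.choice(data, size=len(data), replace=True).mean()
              for _ in range(2000)]
se_bootstrap = np.std(boot_means, ddof=1)

print(f"estimate of the mean : {mean_hat:.3f}")
print(f"classical std. error : {se_classical:.3f}")
print(f"bootstrap std. error : {se_bootstrap:.3f}")
```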

No free lunch in science, or life

In the language of statistics, these trade-offs are formalized by the Cramér–Rao bound, which sets a theoretical limit on how accurate any unbiased estimate can be. No matter how clever our algorithms become, there is a boundary we cannot cross.
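
For reference, the bound in its textbook form says that for any unbiased estimator of a parameter, the variance can never fall below the reciprocal of the Fisher information carried by the data:

```latex
% Cramer-Rao bound: any unbiased estimator \hat\theta of \theta satisfies
\[
  \operatorname{Var}\!\bigl(\hat{\theta}\bigr) \;\ge\; \frac{1}{I(\theta)},
  \qquad
  I(\theta) \;=\; \mathbb{E}\!\left[\left(\frac{\partial}{\partial \theta}
      \log f(X;\theta)\right)^{\!2}\right],
\]
% where I(\theta) is the Fisher information of the data X under the model f.
```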

Machine learning faces a similar constraint. When data scientists use cross-validation to test models, the errors they compute are often independent of the “real” errors that occur once the model meets the world. In other words, if you use all your data to build the perfect model, you have none left to test how good it truly is.
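
A tiny numerical experiment (a generic Python sketch with simulated data, not an example from the talk) shows why: a model scored on the same data it was fit to can look far better than it performs on data it has never seen.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    """Noisy samples around a simple underlying curve."""
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(scale=0.3, size=n)
    return x, y

x_fit, y_fit = make_data(30)       # the data we use to build the model
x_new, y_new = make_data(1000)     # "the world": data the model never saw

# Fit an overly flexible polynomial using *all* of the available data.
coeffs = np.polyfit(x_fit, y_fit, deg=12)

def mse(x, y):
    """Mean squared error of the fitted polynomial on (x, y)."""
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

print(f"error on the data used for fitting: {mse(x_fit, y_fit):.3f}")  # flattering
print(f"error on data the model never saw : {mse(x_new, y_new):.3f}")  # the real story
```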

The lesson echoes far beyond mathematics. In business, in science, in governance, we face the same dilemma: we cannot optimize everything at once. As the speaker joked to the room of vice presidents, “You can’t have the best of everything, at the lowest price.”

When physics meets statistics

The connection between Heisenberg’s uncertainty principle and statistical inference is more than poetic. Mathematically, both stem from the same inequality: the Cauchy–Schwarz inequality, which, applied to the right pair of quantities, puts a hard floor under how small both can be made at the same time.
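
The inequality itself fits on one line: for any two vectors, or any two random variables, the squared inner product is bounded by the product of the squared lengths. Choosing the two objects appropriately yields the Heisenberg relation in one setting and the Cramér–Rao bound in the other.

```latex
% Cauchy-Schwarz inequality in any inner product space:
\[
  \bigl|\langle u, v \rangle\bigr|^{2} \;\le\; \langle u, u \rangle \,\langle v, v \rangle .
\]
% For centered random variables with <U, V> = E[UV], this reads
% Cov(U, V)^2 <= Var(U) Var(V): the seed from which both uncertainty
% principles grow, once U and V are chosen appropriately.
```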

In quantum mechanics, this expresses itself as the impossibility of pinning down both a particle’s position and momentum. In statistics, it shows up when trying to minimize both estimation error and uncertainty on the same dataset. In machine learning, it reappears as the impossibility of achieving perfect prediction and perfect generalization at once.

Different fields, same law of nature.

The geometry of knowledge

Behind the equations lies a geometric truth. Picture two arrows on a plane, one representing what we want to estimate, the other representing our estimation error. If the arrows share a direction, we can adjust one to shorten the other, improving accuracy. But if they stand perfectly perpendicular, orthogonal in mathematical terms, then no such improvement is possible.
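
The picture can be checked with a few lines of numpy (a toy sketch with made-up vectors): when the error shares a component with a direction we are free to adjust, subtracting its projection along that direction shortens it; when the two are orthogonal, the projection is zero and the error stays exactly as long as before.

```python
import numpy as np

def best_correction(error, direction):
    """Shortest residual after removing the component of `error` along `direction`."""
    d = direction / np.linalg.norm(direction)
    projection = np.dot(error, d) * d   # the part of the error we can remove
    return error - projection           # what remains after the adjustment

error = np.array([3.0, 1.0])

overlapping = np.array([1.0, 0.5])      # shares a component with the error
orthogonal = np.array([-1.0, 3.0])      # perpendicular to the error

print(np.linalg.norm(error))                                 # ~3.162, original error
print(np.linalg.norm(best_correction(error, overlapping)))   # smaller: improvement possible
print(np.linalg.norm(best_correction(error, orthogonal)))    # ~3.162, no improvement
```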

This geometry of knowledge explains why we must separate training data from test data, why science requires replication, and why even the most advanced AI systems face built-in limits.

Every act of learning, whether human or artificial, carries an irreducible uncertainty.


Xiao-Li Meng (Harvard University) delivering an AI Seminar Cycle session

Beyond the equation

What began as a mathematical talk soon turned philosophical. The professor recalled his early years as a pure mathematician in China, immersed in abstract algebra and far removed from the messy realities of physics. Now, as a statistician, he sees beauty in imperfection. “In life, just like in statistics,” he said, “you can’t have everything. Every gain demands a compromise.”

Even the universe seems to agree. From the smallest particles to the largest data models, precision and uncertainty dance in constant tension.

The ethics of uncertainty

This idea extends beyond laboratories. In an age of AI, data, and prediction, we often mistake precision for truth. Yet the mathematics suggests humility. Knowing the limits of what we can know may be the most intelligent act of all. Whether in quantum physics or corporate decision-making, the same question remains: how much certainty can we afford, and at what cost?

Because in science, as in life, there is always a price for clarity.