
The EU AI Act: Where it landed and where it might go

The EU Artificial Intelligence Act (AI Act) is the world’s most comprehensive attempt to regulate artificial intelligence, but as Gabriele Mazzini, one of its original drafters, reminded the audience at the Hi! PARIS Meet Up on the AI Act, it’s also a work in progress. In his talk, Mazzini walked through the logic of the Act, its risk-based foundation, and how recent events have transformed its scope. His reflections offered both a behind-the-scenes view and a forward-looking critique.

Why the AI Act focuses on what you do, not what you build

The original draft of the AI Act, developed by the European Commission in 2021, was guided by a clear principle: regulate not the technology, but its applications. The idea was to focus on how AI is used, not what it is. As Mazzini explained, this meant identifying risk levels and aligning them with regulatory obligations.

Three categories of risk were defined (summarized in a short sketch after the list):

  1. Prohibited AI applications, such as social scoring or exploitative and manipulative AI. One of the most controversial proposals involved restricting remote biometric identification systems in public spaces.
  2. High-risk systems, the heart of the regulation, accounting for around 98% of the legal provisions. These systems would be subject to compliance, certification, and CE marking, similar to how medical devices are regulated.
  3. Transparency obligations for systems like chatbots, where the law requires users to be informed when they are interacting with an AI. According to Mazzini, this is not just a technical issue: it’s about human dignity and respecting the way people relate to machines.
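To make the architecture concrete, here is a toy summary of the original risk-based structure in Python, using only the three categories described above. The tier names, examples, and obligation lists are paraphrased for illustration, not the Act’s legal text.

```python
# Toy summary of the AI Act's original risk-based structure.
# Tier names and obligation lists are paraphrased for illustration,
# not the Act's legal text.

risk_tiers = {
    "prohibited": {
        "examples": ["social scoring", "exploitative or manipulative AI"],
        "obligations": ["banned outright"],
    },
    "high_risk": {
        "examples": ["systems regulated like medical devices"],
        "obligations": ["compliance", "certification", "CE marking"],
    },
    "transparency": {
        "examples": ["chatbots"],
        "obligations": ["inform users they are interacting with an AI"],
    },
}

def obligations_for(tier: str) -> list[str]:
    """Look up the paraphrased obligations for a given risk tier."""
    return risk_tiers[tier]["obligations"]

print(obligations_for("high_risk"))  # ['compliance', 'certification', 'CE marking']
```

The point of the structure, as Mazzini described it, is that the obligations attach to the tier a use case falls into, not to the underlying technology.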

Gabriele Mazzini, Architect of the EU AI Act and Research Affiliate at MIT Media Lab | Hi! PARIS Meet Up on the AI Act at VINCI (March 2025)

The turning point: ChatGPT and the U.S. influence

The final version of the Act preserved the risk-based structure, but it was significantly influenced by two external events: the launch of ChatGPT in November 2022, and the Executive Order on AI from the Biden administration in October 2023, which introduced rules for dual-use foundation models.

Together, these developments pushed EU legislators to expand the scope of the AI Act to cover not just applications, but AI tools themselves, especially general-purpose AI models, also known as foundation models.

This new chapter introduced two rule sets:

  1. Transparency for all models, including documentation requirements and obligations to share information downstream, particularly regarding copyright compliance (see the sketch after this list).
  2. Additional obligations for models with systemic risk, including risk assessment, incident reporting, and cybersecurity measures.
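As an illustration of what the first rule set points toward, here is a minimal, hypothetical sketch of the kind of information a general-purpose model provider might document and pass downstream. The field names and values are invented for illustration; the Act’s actual templates live in its annexes and the codes of practice still being drafted.

```python
# Hypothetical sketch of downstream documentation for a general-purpose
# AI model provider. Field names are illustrative, not the AI Act's
# official template.

model_documentation = {
    "model_name": "example-gpai-model",    # hypothetical model
    "provider": "Example AI Labs",         # hypothetical provider
    "intended_tasks": ["text generation", "summarization"],
    "capabilities_and_limitations": "Summary of evaluated capabilities "
                                    "and known failure modes.",
    "training_data_summary": "High-level description of data sources "
                             "used for training.",
    "copyright_policy": "Measures taken to comply with EU copyright "
                        "law, including handling of rights-holder opt-outs.",
    "downstream_integration_notes": "Information a deployer needs in "
                                    "order to meet their own obligations.",
}

for field, value in model_documentation.items():
    print(f"{field}: {value}")
```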

To determine systemic risk, regulators proposed two criteria: the computational power used in training, with the Act presuming systemic risk above 10^25 FLOPs (a compute-based approach that mirrors U.S. thresholds such as the Executive Order’s 10^26); and designation by the AI Office, part of the European Commission, following an approach similar to that of the Digital Services Act.
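To make the compute criterion concrete, here is a back-of-the-envelope sketch. It uses the widely cited ~6·N·D approximation for dense-transformer training compute (N parameters, D training tokens); the model figures are invented for illustration.

```python
# Back-of-the-envelope check of a training run against the AI Act's
# systemic-risk presumption (10^25 FLOPs) and the U.S. Executive
# Order's reporting threshold (10^26 operations).

EU_AI_ACT_PRESUMPTION = 1e25  # AI Act compute presumption, in FLOPs
US_EO_THRESHOLD = 1e26        # U.S. Executive Order threshold

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Common ~6 * N * D approximation for dense transformer training."""
    return 6.0 * n_params * n_tokens

# Hypothetical frontier-scale run: 400B parameters on 15T tokens.
flops = estimate_training_flops(400e9, 15e12)

print(f"Estimated training compute: {flops:.2e} FLOPs")
print(f"Presumed systemic risk under the AI Act: {flops > EU_AI_ACT_PRESUMPTION}")
print(f"Above the U.S. EO threshold: {flops > US_EO_THRESHOLD}")
```

With these invented numbers, the run lands at 3.6×10^25 FLOPs: above the Act’s presumption but an order of magnitude below the U.S. line, which shows how differently the two thresholds bite.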

These rules apply even to open-source foundation models, though some exceptions are allowed.

The AI Act isn’t what it started as

Since the first draft, the AI Act has changed quite a bit, not just in content but also in overall complexity. Mazzini pointed out that the number of prohibited use cases has grown from four to eight, with new ones like emotion recognition and categorization, which he described as “vague” and “too broad.” The list of high-risk applications hasn’t exploded, but it has expanded enough to make compliance more demanding. One big shift is that the regulation no longer focuses only on applications; it now also covers the AI models and tools themselves.

When it comes to general-purpose AI, a lot of the specifics are still being worked out through voluntary codes of practice. That’s led to some debate, especially after a recent letter from EU lawmakers raised concerns about whether those codes are enough to keep up with fast-evolving risks. Meanwhile, governance structures have gotten more complex, both at the EU level and within member states, partly because of the broader scope that now includes general-purpose models.

Another important point is the new set of obligations for companies. The actors formerly called “users” are now “deployers,” and they carry more responsibilities, like conducting fundamental rights impact assessments and sharing more information. Lastly, Mazzini noted that the overlap between the AI Act and other EU laws still isn’t totally clear, and how these different legal frameworks will work together is still being figured out.

Hi! PARIS Meet Up on the AI Act at VINCI (March 2025)

Less would have been more: Mazzini’s assessment

In closing, Mazzini offered a candid reflection: “Less would have been more.” He acknowledged the ambition of the AI Act but emphasized the importance of legal clarity, both for operators who need to comply and regulators who must enforce it.

What should come next?

  1. Clarity and legal certainty: businesses must understand what they’re required to do, and enforcement must be consistent across EU member states.
  2. Sensible interpretations: regulators and courts should aim for realistic, state-of-the-art, and innovation-friendly readings of the law.
  3. Harmonized standards: especially for SMEs that lack the resources to develop compliance mechanisms on their own.
  4. Use-case-based advocacy: companies should engage proactively, using their real-world cases to shape practical interpretations.
  5. Impact monitoring: we need data on how the AI Act is working. Is it increasing trust? Creating confusion? Encouraging innovation or stifling it?

“We are dealing with a fast-moving technology,” Mazzini said. “The law matters, but so does how we interpret and implement it.”

In his view, transparency, pragmatism, and responsiveness will be key to ensuring that the AI Act delivers on its promise without hindering Europe’s AI ecosystem.