
Is AI All Hype, or Is It Changing Our World for Real?

With Artificial Intelligence (AI) advancing rapidly, it’s easy to get lost in the hype surrounding breakthroughs like AlphaFold or be concerned about failures like COMPAS. In this article, we explore the nuanced relationship between AI, ethics, and regulation, drawing on recent research and offering insights into the broader socio-technical implications of AI. This journey follows the theme of “charting” AI to navigate its impact on society, the environment, and regulatory frameworks.

By Tiphaine Viard – Associate Professor in Digital, Organization and Society, Operational AI Ethics at Télécom Paris.

Tiphaine Viard at the Hi! PARIS Meet Up on AI, Ethics & Regulations – October 17, 2024 at Schneider Electric

First, why map the AI landscape?

As AI advances, ethical and regulatory questions become more pressing. However, these concerns do not always evolve at the same pace as the technology. AI, far from being a standalone innovation, exists within a broader socio-technical system, deeply embedded in societal values and practices. To understand how AI will shape the future, it’s crucial to chart its path across different contexts: technical, ethical, and regulatory.

“AI is not just a technical object; it’s also a social object.”

When we chart the landscape of AI, we look beyond flashy technological advancements and failures. While AlphaFold earned global acclaim for its protein-folding breakthroughs, and AI tools like spam filters work quietly behind the scenes, failures like COMPAS raise alarm over biased decision-making in predictive justice. These examples highlight the dual nature of AI: its vast potential and its inherent risks.

What is AI and how does it relate to ethics and regulation?

Defining AI is not straightforward. As debates in institutions like the European Union continue, the boundaries of AI remain fluid. AI serves as both a powerful technological tool and a prism that often magnifies societal biases and inequalities. It’s vital to consider both the successes and the failures of AI systems to fully grasp their ethical implications.

From a regulatory perspective, AI raises questions such as:

  • When is it appropriate to deploy deep learning?
  • How should we regulate high-risk AI systems?
  • What are the long-term risks of losing control over AI?

These are not just technical questions; they are moral and legal dilemmas that challenge our understanding of how AI interacts with society.

Mapping AI's socio-technical system

To answer these questions, we must map AI as a socio-technical system. This approach emphasizes that AI does not exist in a vacuum: it is intricately linked to social, environmental, and economic factors. Tiphaine remarks, “AI doesn’t exist in a vacuum, it’s deeply embedded in society.” This highlights the need for a comprehensive approach to analyzing AI, one that integrates social, technical, and environmental considerations.

One of the key insights is that AI’s impact comes not just from the technology itself but from how society receives and uses it. Unexpected uses, like “Twitch Plays Pokémon”, where players collectively controlled the game through chat, illustrate how AI often evolves in ways its developers never anticipated.

When we chart AI, we begin with an understanding that AI is as much a social object as it is a technical one. As AI continues to develop, policymakers and citizens alike must stay informed about its broad-reaching effects.

Insights from research: AI, ethics, and media

A study of over 1,000 French newspaper articles reveals how AI and ethics are framed in public discourse. Regulation and governance are frequently clustered together, while advancements in machine learning and deep learning dominate another set of discussions. In contrast, AI’s impact on bioethics forms a separate cluster, showcasing the diverse ways AI is approached depending on the context.
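
The article does not detail how these clusters were identified, but a common approach to this kind of thematic analysis is to vectorize each article and group the resulting vectors. The sketch below is purely illustrative, assuming TF-IDF features and k-means clustering via scikit-learn; the corpus, cluster count, and parameters are hypothetical and not the study’s actual method.

```python
# Minimal sketch of thematic clustering over a news corpus
# (illustrative only; not the methodology of the study cited above).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical corpus: each entry is the full text of one newspaper article.
articles = [
    "Le Parlement européen débat de la régulation de l'IA ...",
    "Une avancée majeure en apprentissage profond ...",
    "IA et bioéthique : quelles limites pour la santé ?",
    # ... roughly 1,000 articles in the actual study
]

# Represent each article as a TF-IDF vector
# (a French stop-word list would normally be supplied).
vectorizer = TfidfVectorizer(max_features=5000)
X = vectorizer.fit_transform(articles)

# Group the articles into a handful of thematic clusters,
# e.g. regulation/governance, machine-learning advances, bioethics.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# Print the most characteristic terms of each cluster to label its theme.
terms = vectorizer.get_feature_names_out()
for c in range(kmeans.n_clusters):
    top = kmeans.cluster_centers_[c].argsort()[::-1][:10]
    print(f"Cluster {c}:", ", ".join(terms[i] for i in top))
```

In practice, the choice of text representation and of the number of clusters strongly shapes which themes emerge, which is itself part of the charting exercise.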

These findings highlight how AI ethics spans multiple disciplines, including computer science, economics, law, and sociology. Ethical AI is not confined to technical discussions; it extends to regulatory policies and business innovations. Tiphaine reminds us that “ethical issues surrounding AI don’t necessarily evolve as quickly as the technology itself.” This gap between technological innovation and ethical oversight is a critical aspect to consider as AI continues to advance.

Furthermore, AI themes vary in prominence depending on their context. While fairness in AI is now widely discussed across sectors, issues like Artificial General Intelligence (AGI) remain speculative, with limited engagement from technical or regulatory viewpoints. This variation underscores the importance of mapping AI to identify where the most pressing ethical challenges lie.

AI’s impact on labor and the environment

AI’s influence extends beyond ethical and regulatory concerns to its tangible effects on labor and the environment. The environmental footprint of AI systems and the human labor required to annotate and train AI models are often under-discussed. As AI systems demand vast computational resources, the resulting environmental impact must be considered, especially as the world seeks sustainable technology solutions.

At the same time, the labor involved in AI, particularly in training datasets, is frequently overlooked. Charting these aspects of AI is crucial to ensuring that the technology’s benefits are not overshadowed by its hidden costs. Tiphaine adds, “AI is not creating these biases by itself, but it often acts as a prism, making them more visible or magnifying them.” This reflection on AI’s potential to amplify societal issues further illustrates the importance of ethical oversight and regulatory frameworks.

Charting AI allows us to take a comprehensive view of the ethical, social, and regulatory challenges the technology presents. By understanding these complex interactions, we can better plan for the future and ensure that AI serves the broader public good, rather than exacerbating inequalities or causing unintended harm.

As AI continues to evolve rapidly, we must remain vigilant in assessing its impact across various sectors. Mapping AI not only helps navigate the challenges of today but also prepares us for the ethical dilemmas and regulatory questions of tomorrow.