Three Challenges AI Corporate Policies Should Address for Trustworthy AI

As generative AI (GenAI) progresses further, the challenge of extracting real business value from these innovations becomes more apparent. Despite the AI boom, organizations still struggle to generate measurable returns. In this article, we explore three key challenges businesses must address to harness AI’s full potential and balance value creation with ethical and regulatory concerns.

By David Restrepo Amariles – Associate Professor of Artificial Intelligence and Law, Worldline Chair on the Future of Money at HEC Paris and Hi! PARIS Fellow.

David Restrepo Amariles at the Hi! PARIS Meet Up on AI, Ethics & Regulations – October 17, 2024 at Schneider Electric

First, why aren't we seeing more value from GenAI?

The rapid adoption of AI tools like GPT-4 and other large language models (LLMs) has transformed the technological landscape, but this expansion has not always translated into clear business value. To illustrate this paradox, consider the question: When will generative AI have a significant impact on business? In recent discussions, opinions ranged from “happening now” to estimates of several years in the future.

McKinsey’s evolving predictions about AI’s impact reveal a fundamental issue: many organizations, including research leaders, fail to grasp the full trajectory of AI. In 2017, McKinsey predicted that natural language understanding would reach human-level performance by 2055. Yet by 2023, the same firm had revised its prediction, claiming the milestone had already been achieved. Is this really a matter of forecasting accuracy, or simply a reflection of the speed of change in AI? Either way, this rapid evolution raises important questions about the readiness of businesses to adapt.

Three hypotheses: Why isn't AI delivering on its value promise?

To address this paradox, David proposes three main hypotheses:

1- A Lack of Understanding of AI’s Trajectory

Many companies and researchers underestimated the speed at which AI would evolve. Collaborations between academia and industry have significantly accelerated the pace of AI advancements, such as the introduction of transformers in 2017. Businesses need to align their understanding of AI’s capabilities with these rapid changes to stay competitive.

2- Decreased Time to Market

There is a growing trend of releasing AI tools before they are fully refined, creating a “work-in-progress” mentality within businesses. “AI tools are being launched unfinished,” allowing companies to gain early insights but also creating challenges in refining and improving these tools in real time.

3- A New Division of Labor

Previously, organizations developed custom AI solutions internally. Today, however, companies like OpenAI provide pre-built models, and businesses must focus on leveraging these tools to create value. “Even the creators of these models don’t fully understand their capabilities,” making the process of adopting AI even more complex.

But how can businesses stay ahead of AI advancements?

As AI capabilities continue to surpass performance benchmarks, the question remains: how can organizations capitalize on these advancements? “We’re at the peak of inflated expectations for generative AI,” as seen in the 2024 hype cycle. The choice businesses face now is whether to wait or take action. Those that act now and integrate AI into their operations are more likely to benefit when AI reaches its “plateau of productivity.”

New labor distribution in AI

The shift from developing AI models internally to relying on external providers has created new challenges for businesses. Large companies like OpenAI produce advanced AI models, while businesses must find ways to integrate these systems into their workflows. This new labor distribution introduces uncertainties and risks, such as working with unfinished AI products while trying to capture value. “Organizations need to balance the potential benefits of generative AI with the risks it presents,” including intellectual property concerns and the challenge of dealing with inaccurate or misleading AI outputs.

Value creation and risks in generative AI

While generative AI holds enormous potential, it also introduces significant risks. “Generative AI systems like GPT can hallucinate,” producing incorrect or misleading information. Businesses must learn to work around these flaws, treating them as part of the tool’s design rather than bugs. At the same time, organizations must address additional risks, such as intellectual property issues and data privacy concerns.

Aligning individual and corporate productivity gains becomes essential here. Many employees are using GenAI tools like GPT without disclosing it, creating a misalignment between individual productivity boosts and corporate-level risk management. “The paradox is that individuals capture the benefits of AI, while companies bear the risks of ungoverned GenAI use.” Addressing this misalignment through corporate policies is critical.

One major challenge is the phenomenon of “shadow adoption”: the unofficial use of AI tools by employees without their managers’ knowledge. In a recent study, we found that junior analysts in a large consulting firm were using GPT to solve business problems, often without revealing this to their superiors. This shadow adoption creates a disconnect between individual benefits and team-level value creation.

The Manager's Dilemma: Hidden AI Usage

Managers face another dilemma: they may prefer outputs generated with AI, but they are often unaware when AI tools have been used. “Managers were unable to identify AI-generated content unless it was disclosed,” which creates an issue of information asymmetry. This brings us to the second key point:

Rethinking Incentive Structures is necessary to address the agency problems introduced by AI. Information asymmetry, where employees benefit from AI use without disclosure, presents a moral hazard. Organizations must rethink their incentive structures to ensure collaboration is not hindered by AI adoption.

Generative AI presents a paradox: while individuals are capturing value from these tools, teams and organizations are not reaping the same benefits, and businesses are bearing significant risks. To address this, corporate AI policies must be updated to ensure that the benefits of AI are distributed across teams, and that the risks are effectively managed.

Guidelines for AI Evaluation should also be established. AI policies need to offer clear, multi-dimensional guidelines that balance technical, business, legal, social, and environmental metrics. “Making these trade-offs explicit is critical for effective AI governance and compliance.” By addressing these three challenges (understanding AI’s trajectory, managing unfinished products, and navigating the new division of labor), companies can create a solid foundation for trustworthy AI that benefits the entire organization.
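To illustrate what making such trade-offs explicit could look like in practice, here is a minimal sketch of a multi-dimensional evaluation scorecard. The dimension names come from the article; the weights, scores, threshold, and floor are invented for illustration and do not reflect any actual corporate policy.

```python
from dataclasses import dataclass

# The five evaluation dimensions named in the article.
DIMENSIONS = ("technical", "business", "legal", "social", "environmental")


@dataclass
class AIToolAssessment:
    scores: dict  # dimension -> score in [0, 1]; illustrative values only

    def weighted_score(self, weights: dict) -> float:
        """Aggregate per-dimension scores with explicit weights, so the
        policy's trade-offs are visible rather than implicit."""
        total_weight = sum(weights[d] for d in DIMENSIONS)
        return sum(self.scores[d] * weights[d] for d in DIMENSIONS) / total_weight

    def approve(self, weights: dict, threshold: float = 0.7, floor: float = 0.5) -> bool:
        """Approve only if the weighted score clears the threshold AND no
        single dimension falls below a minimum floor: a strong business
        score cannot buy off a weak legal or social score."""
        if any(self.scores[d] < floor for d in DIMENSIONS):
            return False
        return self.weighted_score(weights) >= threshold


# Hypothetical policy weights and assessment scores.
weights = {"technical": 0.3, "business": 0.3, "legal": 0.2,
           "social": 0.1, "environmental": 0.1}
tool = AIToolAssessment(scores={"technical": 0.9, "business": 0.8,
                                "legal": 0.6, "social": 0.7,
                                "environmental": 0.6})
print(round(tool.weighted_score(weights), 2))  # aggregate score
print(tool.approve(weights))                   # passes threshold and floors
```

The per-dimension floor is one way to encode the article's point that these metrics must be balanced rather than simply summed: no amount of business value should offset an unacceptable legal or social score.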