In March 2024, the U.S. Securities and Exchange Commission (SEC) sanctioned Delphia (USA) Inc. and Global Predictions Inc. for promoting artificial intelligence capabilities that didn’t exist. One claimed to personalize investments using AI, yet lacked the technology. The other presented itself as the first regulated AI financial advisor without delivering on that promise. This phenomenon already has a name: AI washing—when a company pretends to use AI or exaggerates its capabilities to appear innovative, similar to greenwashing in environmental matters.

Cases like these not only damage market trust but also highlight something we already know but must emphasize: artificial intelligence is no longer science fiction or in an experimental phase. AI is part of our daily lives—it influences our decisions, operations, homes, and workplaces. That’s why it requires responsibility, strategic vision, and a solid governance framework to ensure its ethical and trustworthy use.

Companies have recognized AI’s transformational potential to drive productivity, efficiency, and innovation. It has moved from being a purely technical topic handled only by experts to being a strategic component of organizations. However, as AI becomes embedded in the core of business operations, challenges multiply, demanding a robust governance framework to guide its responsible deployment, manage risks, and ensure that its impact is positive, ethical, and sustainable.

According to the 2025 Artificial Intelligence Index by Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), corporate use of AI accelerated from 55% in 2023 to 78% in 2024. Meanwhile, Gartner’s “AI Governance Frameworks for Responsible AI” study found that 55% of organizations reported not having any AI governance framework in place. Similarly, McKinsey’s 2024 Global AI Survey showed that only 18% of respondents said their organizations had a board with authority to make decisions involving responsible AI governance.

The contrast is stark and concerning. While AI usage increases, its governance is lagging behind. This imbalance is not minor; it reveals that many companies are racing ahead with a powerful technology but lack the steering wheel to guide or control it. In a world where innovation moves fast, operating without strong governance is not just risky—it’s a strategic vulnerability that can come at a high cost.

But what is AI governance? It refers to the set of policies, processes, and people responsible for ensuring that AI is developed, used, and implemented ethically, transparently, and aligned with a company’s values and corporate purpose. Governance must be integrated into business strategy, guiding technological decisions with a long-term view, anticipating and mitigating risks—whether internal or from third parties—and generating trust among stakeholders.

This presents a new challenge for boards of directors, who must lead AI governance in an active and well-structured manner to ensure it becomes a competitive advantage rather than a liability.

The risks AI poses to companies are wide-ranging—from intellectual property violations, algorithmic bias, misinformation, and inaccuracies, to lack of transparency, data privacy concerns, model hallucinations, growing cybersecurity threats, and deepfakes, among others. All this unfolds in a context where public trust and regulation aren’t advancing as quickly as AI adoption, and where doubts remain about whether companies are protecting data privacy or using it ethically.

It’s also important to note that artificial intelligence is not an obligation—it is a strategic decision. Companies must place this issue on the table and determine whether they will adopt AI or not, justifying their choice based on business objectives. Because AI is not just a tool to showcase innovation and modernity—its implementation requires judgment, commitment, and a well-defined purpose.

Therefore, it is imperative for companies to act with urgency to establish governance structures that evolve alongside the growth of AI, turning this challenge into a competitive edge. Those who act responsibly today will not only anticipate and manage risks—they will gain leadership, trust, and long-term sustainability. The question is no longer whether AI should be governed—but how we are doing it.

By Susana Sierra
Originally published in El Mostrador