Are you afraid of losing your job because of artificial intelligence? The question may sound exaggerated, but it reflects a growing concern across sectors as this technology continues to advance, increasingly replacing human tasks in various industries and stirring unease.

A recent example is Tilly Norwood, an “actress” generated by artificial intelligence who has attracted the attention of talent agencies. Although she has not yet appeared in any production, her presence on social media as an influencer has already sparked controversy in Hollywood. A similar case involved Jianwei Xun, the supposed visionary author of the acclaimed essay Hypnocracy, cited in conferences and praised by European intellectuals — who turned out to be a fictional creation by Italian philosopher Andrea Colamedici, developed with AI systems as part of an experiment to “raise awareness.”

These cases, which not long ago seemed like science fiction, are now reality — showing how artificial intelligence can produce content that is believable yet entirely fictional, and even take on human form. And this goes far beyond the creative or artistic worlds. AI is transforming how we work, make decisions, and interact within organizations. It automates tasks, redefines roles, and in some cases, replaces functions altogether.

Therefore, as organizations adapt to this new reality, they also bear the responsibility to support people through the transition. Because transformation is not only technological — it is human. To ensure sustainability, it must be driven by clear principles regarding how AI is implemented, integrated, and communicated.

Artificial intelligence is, arguably, the industrial revolution of our time. And as such, it will bring profound change. But the challenge is not just to move forward — it is to do so ethically, purposefully, and with people at the center. Let's not forget that companies are made up of people, and those people must take an active, conscious role in facing this disruption. AI can optimize processes, but it cannot replace judgment, critical thinking, or human verification. Therefore, rather than displacing workers, it should empower and prepare them. It is not enough to focus adoption on business models; we must also focus on those who make them possible.

While younger generations arrive with more ingrained digital skills, much of today’s workforce has suddenly encountered the rise of AI and faces the challenge of adapting quickly to avoid being left behind. Companies have a duty to support this transition by providing training, tools, and opportunities to update skills. Because implementing AI responsibly is not just about adopting technology — it’s about taking care of the people who sustain it.

According to McKinsey’s State of AI (2025), 78 percent of organizations already use artificial intelligence in at least one business function. Meanwhile, the World Economic Forum’s Future of Jobs Report 2025 reveals that 63 percent of employers view skills gaps as the greatest barrier to transformation. In response, 85 percent plan to prioritize workforce reskilling, 70 percent expect to hire new talent, 40 percent foresee reducing headcount as certain skills become less relevant, and 50 percent plan to reassign staff. Clearly, the future of work demands strategic action today.

That is why AI deployment cannot be left to chance — or solely in the hands of technical teams. It requires leadership, vision, and commitment from senior management. Hence, AI governance must be an urgent priority. Innovating without a solid structure can lead to serious consequences and to decisions disconnected from a company’s purpose, values, and people. Governance means developing clear internal policies based on guiding principles that shape AI’s development, use, and continuous oversight. It also requires assigning accountability, ensuring process traceability, protecting personal data, and fostering an organizational culture that embraces digital ethics and values skill development as part of the transformation.

Implementing AI responsibly also means building teams capable of questioning the technology, understanding its scope, and making informed decisions about its use.

If companies manage to integrate AI with a clear vision, invest in their people, and operate under well-defined principles, this technology can become a true competitive advantage. But if they ignore its cultural and human impact — and the risks that come with it — they risk eroding trust, losing corporate identity, and widening gaps. Because ultimately, no matter how sophisticated it becomes or how much power it gains, AI does not embody values. Responsibility, at the end of the day, will never be artificial — it will always be human.

By Susana Sierra
Published in La Tercera