Although it may seem like a recent invention, Artificial Intelligence (AI) has been with us for decades: in our smartphones, in the algorithms behind social networks, in predictive text, voice assistants, and GPS navigation. We live with it every day, yet we seem not to have noticed its power until now, when the speed of its evolution has shown that it can perform functions we considered exclusively human, easing some tasks but also sounding the alarm about its inherent risks.

And the use of AI will only increase, because it makes our lives easier and its benefits seem endless. Many companies rely on it to analyze data, run virtual assistants for customer service, fine-tune the algorithms that promote their brand, and support human teams by automating routine tasks. At the same time, questions persist about whether it will eliminate jobs, what hidden biases it carries and, perhaps hardest to discuss, what unpredictable dangers its implementation entails.

We see this with the rise of ChatGPT and its many capabilities, which, with the new GPT-4 version, can perform human tasks with greater precision and answer more complex questions. Yet this innovation is not free of concerns: it may deliver wrong or biased information, open the door to identity theft, expose our data, infringe intellectual property, or spread fake news.

This last point is worrying, because if we do not act, it will inevitably deepen our doubts about reported truth amid the profound crisis of trust the world is already going through. Naturally, any malicious interference in the collective perception of reality seriously undermines our democracies. The effect is made worse by deepfake technology (video, images, or audio that falsify a person’s appearance) and by AI image generators such as MidJourney, which works with an incredible level of detail. Just a few days ago, “photographs” of former U.S. President Donald Trump being arrested by the police were circulating. How many people believed, if only for a moment, that those images were real? How will we distinguish reality from lies? What are the limits of AI? These questions remain unanswered.

The accelerated advance of AI is raising alarms. Recently, a group of more than a thousand technology experts, including entrepreneur Elon Musk and Apple co-founder Steve Wozniak, published an open letter calling for a pause of at least six months in AI development, arguing that the technology is outpacing existing controls and that safety protocols are urgently needed. Italy went so far as to ban ChatGPT, joining China, Iran, North Korea, and Russia, after its data protection authority found that the tool collected personal information without respecting consumer data protection law and lacked a legal basis for doing so.

Rather than refusing to accept this reality, we must explore it, become aware of its risks, and warn of them in order to prevent them. Above all, we must apply ethics: making decisions based on shared values and with the common good in mind.

Companies must take part in this discussion, because this technology will bring them innumerable benefits but will also oblige them to set ethical limits on its use. It will be imperative that they encourage critical thinking and prove capable of providing more value than AI alone, because we will need professionals who can reason and interpret better than AI if we want to use it as a tool to improve our lives, rather than letting AI, or those who control it, manipulate us.

A dramatic case was the suicide of a Belgian man who had held intensive conversations with a chatbot named Eliza about his anxiety over the climate crisis and the future of the planet. He suggested the idea of sacrificing himself if the AI agreed to “take care of the planet and save humanity”, and Eliza did not contradict him. Another example came from the American mental health company Koko, which deployed a chatbot to attend to patients virtually but had to withdraw it when patients discovered that the messages they received were machine-generated. The company acknowledged that “simulated empathy feels strange”. Undoubtedly, in the field of mental health the alarms should be louder, because the gray areas are even greater.

Along the same lines, it is worth mentioning the ethics of the algorithms that steer us as we browse the Internet. Although they are not bad in themselves, they often exploit psychological tricks to manipulate us. That became clear with the Cambridge Analytica scandal, in which the firm used a psychological model and a highly accurate algorithm to analyze the profiles of millions of Facebook users and tried to influence their voting decisions, demonstrating that algorithms can cross red lines.

A legal framework for AI is urgently needed, because its use is universal and increasingly indispensable. Even more, while regulations remain pending (and they will surely take years to arrive, even as new technologies keep emerging), we need an ethical framework driven by leaders, with a shared commitment that binds both the creators of AI and its users. If we do not anticipate its potential risks now, it may be too late.

By Susana Sierra
Published in La Tercera