This article first appeared in La Tercera on August 24, 2021.
We are immersed in the digital era, in the midst of the maelstrom of technology and its constant evolution. What is a revolutionary innovation today soon becomes an everyday occurrence, and later obsolete. And so we live in an endless loop, acquiring the latest innovation only to replace it, more and more frequently, with the next one.
Information and communication technologies (ICTs) have not only made our daily lives easier but are also an important driver of economic and social development. However, they are advancing faster than the laws that regulate them, leaving an ethical void that must be addressed.
Clearly, ICTs are neither good nor bad in themselves; their purpose is to satisfy various needs (even those we did not know we had). Thanks to them, processes are automated, we are globally interconnected, and we can order food or a cab without having to move or wait on a street corner. The real question lies in how technology is used and how the risks it entails are addressed.
As is often said, “data is the oil of the 21st century”, and the value of this intangible asset is increasing rapidly. Therein lies the urgency of establishing a digital ethics that safeguards the common good and addresses today’s major concerns: personal data leaks, misuse of user information, privacy violations, and the biases of technology’s creators, among others. All of this erodes trust, which harms companies, individuals, and the ecosystem in general.
This is where digital ethics plays a crucial role: digitally responsible companies strive to do the right thing because they understand that the economic benefits and efficiency of their products must go hand in hand with social welfare. This means that companies must define their ethical frameworks and assess potential risks, with a focus on integrity, their own values, and transparency.
In this context, trust must be the central and most effective resource to overcome the current skepticism towards new technologies and thus counteract the bad practices of a few companies.
One of the greatest risks technology exposes us to concerns our personal data: sensitive information that we hand over in order to adapt to the times and access better benefits, but whose misuse is increasingly common.
An example of this was Cambridge Analytica’s use of the data of millions of Facebook users: beyond a leak for commercial purposes, the information was used to manipulate users through political ads, contributing to the triumph of Trump and of Brexit. After this scandal, the impunity with which large technology companies operate, and their growing power, came under question.
On this matter, Chile still has an outdated personal data protection law, while a bill that seeks to modernize it remains stuck in Congress. In the meantime, users are risking their privacy and are even being evaluated for access to credit or to car and health insurance, since their entire lives are exposed. This has become especially relevant during the pandemic, as people are more willing to give up their personal data in exchange for health or freedom.
Another ethical risk is algorithmic bias, i.e., bias that reflects the values of the people who develop the technology and therefore leads to discrimination. This is what happened to Amazon, which, despite its experience in Artificial Intelligence (AI), had to scrap software meant to automate personnel selection because it discriminated against women. The reason was that the algorithms made decisions based on historical data, which usually perpetuates existing biases; in this case, the bias stemmed from the fact that men have long predominated in the technology world.
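To make that mechanism concrete, here is a purely illustrative Python sketch (not Amazon’s actual system; all data, names, and thresholds are hypothetical) showing how a screening rule trained on historically biased hiring records simply reproduces, and automates, the old bias:

```python
# Purely illustrative: a toy "screening model" trained on historically
# biased hiring records. All data here is hypothetical.

# Historical decisions: women were hired far less often than men.
history = [
    ("male", "hired"), ("male", "hired"), ("male", "hired"),
    ("male", "hired"), ("male", "rejected"),
    ("female", "rejected"), ("female", "rejected"), ("female", "hired"),
]

def train(records):
    """Estimate the historical hire rate for each group."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [o for g, o in records if g == group]
        rates[group] = outcomes.count("hired") / len(outcomes)
    return rates

def screen(candidate_group, rates, threshold=0.5):
    """'Automated' decision: recommend only groups with a high past hire rate."""
    return "recommend" if rates[candidate_group] >= threshold else "reject"

rates = train(history)
print(rates)                    # {'male': 0.8, 'female': 0.33...}
print(screen("male", rates))    # 'recommend'
print(screen("female", rates))  # 'reject' -- the historical bias, now automated
```

The point of the sketch is that nothing in the code is explicitly “sexist”; the discrimination comes entirely from the historical data the rule learns from, which is why human judgment and ethical review remain indispensable.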
Most ICTs are born with a good purpose. Pegasus software, for example, was created to pursue criminals and terrorists by penetrating mobile devices and accessing all their information, yet it ended up being misused by various governments, which monitored some 50,000 people deemed “of interest”.
Digital ethics should pursue equity and inclusion and demand responsible decisions, because if left unchecked, ICTs and AI can promote misinformation, exacerbate polarization, create addiction, and amplify prejudices and inequalities; automating decisions therefore still requires human judgment. Companies must assume their role in digital ethics rather than wait for laws to act, since systemic changes are required to move the needle of ethical behavior in decision-making environments.
By Susana Sierra