In September 2024, the U.S. Department of Justice (DOJ) published a new update to its Evaluation of Corporate Compliance Programs. These modifications respond to the need to adapt to the current context while reflecting the DOJ’s ability to evolve its approach along with the changes and challenges that arise in business practice.

In the previous update, in April 2023, one of the most significant changes was a new section on compensation structures, which guides prosecutors in assessing whether companies implement compensation systems incentivizing corporate compliance that are consistent with the company’s values and policies. Also introduced during that period was the Criminal Division’s Pilot Program Regarding Compensation Incentives and Clawbacks, seeking to promote the recovery of ill-gotten compensation.

The 2024 version incorporates important changes that respond to current trends, focusing on the urgency for companies to properly manage emerging risks associated with the use of advanced, disruptive technologies, especially artificial intelligence (AI). The new guidelines emphasize the importance of identifying and managing the impact that these technologies may have on companies’ ability to comply with laws and promote responsible and ethical use of these tools, without losing sight of the design and overall effectiveness of compliance programs.

This update is in line with the keynote delivered by Deputy Attorney General of the United States Lisa Monaco at the American Bar Association’s 39th Annual White Collar Crime Institute in March 2024. There, she discussed the DOJ’s active strategy to pursue corporate crime, including new threats arising from the malicious use of technology, particularly AI, which, despite its undeniable benefits, is being used to facilitate illegal activity. “To be clear: Fraud using AI is still fraud. Price fixing using AI is still price fixing. And manipulating markets using AI is still market manipulation. You get the picture,” asserted Monaco.

These guidelines encourage us to incorporate AI-derived risks as an integral part of compliance programs. In doing so, companies can implement appropriate measures to prevent the misuse of technology, protect the integrity of their operations and ensure compliance with current regulations.

In navigating this new challenge, companies must pay attention to the following eight key points to manage risks related to AI and other technologies:

1. Identify And Manage Emerging Risks

Companies must have a clear and detailed process to identify, assess and manage internal and external risks associated with new technologies. This process must ensure that risk management is aligned with current regulations, established company policies and industry best practices, thereby protecting security, privacy and corporate integrity.

2. Integrate Technological Risks Into Business Strategy

Risk management related to the use of AI and other emerging technologies must be an integral part of companies’ strategic planning and decision making. This requires not only identifying and assessing these risks, but including them as essential factors in defining objectives, growth plans and operations, ensuring adequate preparation to mitigate potential negative impacts on the business.

3. Establish Strong Governance For The Use Of AI

Companies must implement a strong governance structure that oversees AI use, ensuring effective risk management and guaranteeing responsible, ethical, trustworthy and transparent use. Boards must approach AI just as seriously as they do other critical issues, recognizing its impact on corporate obligations, opportunities, risks and, especially, on all the company’s stakeholders. To this end, many boards are forming specialized AI committees and participating in ongoing training to keep up with technological advances and their potential impacts.

4. Learn How Third Parties Are Using AI

It is very important for companies to know how their partners, suppliers and other external third parties are using AI and managing its risks, because those practices can directly impact the company’s operations, security and corporate reputation. Companies should therefore review how third parties use such technology and require them to implement adequate controls to minimize the associated risks.

5. Ensure AI Practices Are Transparent And Understandable To Stakeholders

It is essential that companies communicate clearly and accessibly how they are using AI, including the risks associated with its use and the measures they are implementing to mitigate potential negative impacts. This will strengthen stakeholder trust and stand as a clear sign of transparency and good governance.

6. Recruit New Talent

New technological tools and systems require expert professionals with specialized skills, capable of effectively integrating technologies such as AI into corporate processes, as well as implementing and managing solutions in areas such as cybersecurity and data protection and analysis. The recruitment of new talent will not only allow the company to remain competitive but also increase its ability to adapt to rapid technological evolution.

7. Train Employees In The Use Of Emerging Technologies

Training employees on AI and other emerging technologies facilitates their integration into corporate governance by making everyone aware of the applicable policies and controls, as well as of the ethical and responsible use of these tools. This protects the company, its members and all its stakeholders, allowing them to make the most of the potential of this innovation.

8. Monitor The Use Of AI In All Areas Of The Business

Continuous monitoring is essential to ensure responsible implementation, in line with corporate policies and regulations. This requires verification that AI is used exclusively for its intended purposes and that there are appropriate human oversight mechanisms to ensure correct use and alignment with the company’s values.

It is undeniable that AI and other emerging technologies can increase company productivity, offering a considerable competitive advantage. However, their implementation also poses significant ethical challenges, especially given their still uncertain impact and the regulatory gaps created by their rapid advancement. Companies must therefore address this issue seriously and establish adequate controls to manage the associated risks. Otherwise, they could face a DOJ investigation and incur serious penalties, reputational damage, financial losses and, worse yet, the loss of their stakeholders’ trust.


By Susana Sierra
Published in Forbes