The mosaic of AI regulation in Latin America: where we are at

3/22/2024
Image created by Dall-E

No matter when you read this, a bill to regulate artificial intelligence (AI) is being debated somewhere in Latin America. Even before these technologies became available to everyone, legislators and governments across the region had been proposing measures, from very different angles, to address the challenges they pose in their countries.

The flurry of initiatives spans all kinds of approaches. In Chile, one bill seeks to increase penalties for crimes committed with the help of AI, while another in Brazil focuses on banning sexual deepfakes. In Peru, meanwhile, yet another aims to reform the Constitution and make AI a principle in the administration of justice for resolving court cases.

The proposals also vary in how they were conceived. In Costa Rica, for example, the bill for an Artificial Intelligence Regulation Law was drafted directly by ChatGPT, following the instructions of its proponents, who asked it to reason as a lawyer and legislative advisor and to generate a regulatory proposal that took into account the country's Constitution.

The result is a text that generically strings together globally discussed principles and even suggests the creation of a regulatory authority, without explaining how it would be composed or which institution it would answer to. In short, it is an example of the misuses and limitations of AI and, ultimately, of proposals drafted without any prior multisectoral dialogue.

Although in the middle of last year Peru enacted a framework regulation with general principles and public-policy objectives for AI, no country in the region has yet passed a law regulating the use and development of these technologies.

A report published a few weeks ago by the organization Access Now maps the AI regulation initiatives under discussion in Latin America. The document, which covers bills discussed up to December 1, highlights the relevance of the topic on the public agenda and the general intention to prevent the main risks of AI use.

A frequent difficulty for regulators is coming up with a definition that covers the broad spectrum of what is meant by AI and that manages to group together technologies as different as text generation models such as ChatGPT, mobility applications such as Waze, social network algorithms, and facial recognition systems, among others.

A proposal in Colombia illustrates how legislators can stumble over this problem. One article of the draft defines AI as programs "that perform tasks comparable to those performed by the human mind, such as learning or logical reasoning", which, as the report points out, not only ignores how AI systems actually work (they are based on predictive models) but also overlooks the fact that the processes by which the mind operates have not yet been fully understood.

Amid the proliferation of proposals, several share common ground, since many draw on regional or global debates such as the Unesco Recommendation on the Ethics of Artificial Intelligence, the Montevideo Declaration on AI, and the Artificial Intelligence Act recently approved by the European Union.

The latter is based on a risk-tiered system: the higher the risk that an AI system will affect human rights, the stricter the requirements imposed on its developers. The scale classifies AI systems from minimal to unacceptable risk, with the latter category prohibiting, for example, chatbots that encourage dangerous behavior and social scoring systems. Among other measures, the Act provides for impact assessments to eliminate or mitigate risks, transparency obligations, and mechanisms to ensure human oversight of these systems.

Even before the law was approved in the European Union, its scheme was being taken as a model on the other side of the Atlantic, as can be seen in bills discussed in Argentina, Brazil, and Chile.

For Access Now, this approach could be problematic because it does not place the protection of rights at the center of regulation. "When public policies are designed based on risk-based approaches, human rights guarantees are negotiated on the premise that they must be balanced with other values such as innovation," the report reads.

Instead, the organization suggests that AI bills reinforce the rights of the most vulnerable people and take an uncompromising stance against AI tools that infringe on human rights and dignity, such as applications for remote biometric identification or for automatically detecting gender.

The document also highlights the need for regulations in Latin America to be designed without losing sight of the social, economic, cultural, and political realities of the region, which pose challenges very different from those of the global north, whose regulatory frameworks are being taken as a reference.

Moreover, precedents such as the European Union's Digital Services Act, whose entry into force has exposed the difficulties of implementation, are a warning about the need to design oversight mechanisms and structures capable of ensuring compliance with the law.

As the effects of AI continue to spread through social life, the information ecosystem, and political campaigns, interest in regulating it will keep growing. However, democracies in the region, as in much of the world, face the challenge of reining in a highly technical issue that moves much faster than legislative procedures.

This article originally appeared in Botando Corriente, our newsletter. You can subscribe here.