Last week, during the Summit of the Future held in New York, world leaders approved the Global Digital Compact, a general framework for the global governance of technology and Artificial Intelligence. Among other things, the new pact calls on governments and the private sector to contribute to a fund for AI development, commits to creating an international scientific panel and establishes a global dialogue within the United Nations.
In the opinion of Vanina Martinez, a member of the UN Advisory Body on AI, the adoption of this pact gives Latin America the opportunity to communicate its perspectives on the issue to the world and to improve the role it has played so far. "In most international governance frameworks, Latin America is part of the discussion, but it is not in the discussion. The region has to assert its own needs and realities," said Martinez at the democracIA forum held in Buenos Aires on September 24.
The event, organized by Luminate, Civic Compass and the International Fund for Public Interest Media, brought together experts from the region and around the world to analyze the challenges that AI presents for democracy, delve into its technical aspects and connect conversations between the global north and south.
The urgency of addressing the challenges of AI has prompted legislators around the world to draft and debate bills to regulate these technologies. As in other areas, European Union regulations have served as the model for designing laws and other mechanisms to prevent the possible negative impacts of AI.
In particular, the risk-based approach of the European Union's Artificial Intelligence Act has paved the way for other initiatives around the world. However, many voices have warned about the flaws of this system, under which the greater the risk an AI system poses to human rights, the greater the obligations placed on its developers.
"When public policies are designed on the basis of risk-based approaches, human rights guarantees are negotiated on the premise that they must be balanced with other values such as innovation," reads an Access Now report published in February of this year.
Moreover, this approach raises a paradox: companies must self-assess their own exposure to these risks. "They are basically grading their own homework," Maroussia Lévesque, a research fellow affiliated with Harvard University's Berkman Klein Center, told the forum.
Beyond the flaws of frameworks now beginning to be implemented in other parts of the world, there is a risk that the local perspective gets lost in the discussions to adapt them. For Juan Carlos Lara, executive co-director of Derechos Digitales, importing models means sidestepping the democratic debate our societies need in order to find solutions to their specific problems, which are naturally not the same as those of Europe or the United States.
According to Claudia Lopez, a researcher at Chile's National Center for Artificial Intelligence (CENIA), in a market dominated by companies from the global north there are few incentives to promote AI development, or to evaluate its possible impacts, in other regions. Against this backdrop, it is essential that Latin American organizations, governments and academic institutions take on this task.
This implies thinking more carefully about the relationship between our societies and the industry. To some extent, the business reproduces the dynamics of resource extraction: foreign companies locate data centers in the region, hire low-cost local labor and process data produced by people from the region, without necessarily extending the benefits to those communities, as Paola Ricaurte, a researcher at the Department of Media and Digital Culture of the TEC de Monterrey, pointed out. "We export pears and import canned pears," said Luciana Benotti, a researcher at the University of Córdoba, about these extractive models.
Given the region's lack of domestically developed technologies, its governments become preferred clients of these companies. "This asymmetry ends up shaping the vision behind public policies," added Ricaurte.
Another major concern surrounding AI is the possibility of reproducing biases, an issue that some regulatory projects have taken up. Benotti pointed to the possibility of correcting biases in the datasets that feed language models so that they do not exclusively reflect dominant values. This requires mechanisms to access that data and complement it with data representing cultural and linguistic diversity, as well as qualified personnel from the region, who, under current conditions, are leaving.
The avalanche of elections this year brought the first glimpses of AI as a weapon of political campaigning and disinformation in Latin America. For Franco Picatto, executive director of Chequeado, the most recent elections in Argentina revealed the tip of the iceberg of the potential damage these technologies can cause.
Despite forecasts, the evidence does not yet show massive use of advanced AI techniques for electoral disinformation, but rather frequent use of cheapfakes: cruder, less realistic montages. Even so, there are warnings about attacks through synthetic sexual content, a practice that especially exposes women, as noted by Patricia Villa-Berger, an associate researcher at the Instituto Tecnológico de Estudios Superiores de Monterrey.
More than on regulatory approaches, experts agree on the need to scale up efforts to improve digital literacy in the region and to teach people to detect misinformation and avoid amplifying it.