In late March, a group of researchers, philosophers and members of the technology industry signed an open letter calling on artificial intelligence (AI) companies to slow the development of their models and products while adopting safety measures and guidelines to avert the risks of a technology that could "change the history of life on Earth". Although nothing has slowed the dizzying launch of new tools since then, widespread concern has begun to mobilize regulators and authorities, who are moving far more slowly than the phenomenon they are trying to tackle.
That is the case with the US Congress, which on May 16 questioned Sam Altman, CEO of OpenAI - the company behind ChatGPT and DALL-E - and with the European Union, which is once again at the forefront of technology regulation with a bill aimed at imposing responsibilities on AI developers. A similar bill is also taking shape in Brazil, where the initiative was born into a stormy regulatory climate, as we discussed in the last edition of Botando Corriente.
"I think if this technology goes wrong, it can go very wrong. And I want to be emphatic about this." In this day and age, the phrase could be put in the mouth of any expert concerned about the advances and risks of artificial intelligence. What is surprising is not only that it was uttered by Sam Altman, one of the most visible faces of the rise of AI, but also that it was uttered in front of the U.S. Congress.
The hearing was conducted with unusual friendliness, in contrast to what usually happens in this kind of proceeding, where members of Congress grill Silicon Valley executives with hostility but without consequence. For some analysts, Altman delivered a masterclass in public relations, showing his ability to steer the conversation away from uncomfortable territory - such as his systems' use of copyrighted material - while telling lawmakers everything they wanted to hear.
Altman echoed all of the lawmakers' concerns about AI and urged them to regulate his own industry. He suggested steps to mitigate potential job losses and joined Professor Gary Marcus - an AI critic also present at the hearing - in proposing the creation of an agency that would license the development of large-scale models and require pre-launch safety testing of new ones.
Altman's performance was so successful that at one point Senator John Kennedy even asked whether he would be willing to run such an agency should Congress create it. When Altman replied that he was happy in his current job, the senator asked him to recommend people who could.
The hearing also discussed options such as labels, in the style of nutrition facts tables, that would make the characteristics of each product visible and promote competition based on transparency and safety.
Beyond the photos and headlines, the United States has a long tradition of failing to regulate new technologies. Although both Democrats and Republicans seem to agree on the need to impose limits on social media companies, none of the more than 25 bills introduced in recent years to eliminate or modify the liability regime for those players has managed to pass.
For journalist Parmy Olson, Altman's attitude at the hearing may reflect this congressional dynamic: recent history has given him the freedom to sweet-talk lawmakers and float proposals that could affect his business precisely because they are unlikely, at least in the short term, to become law.
Since 2021, the European Union has been working on a text that would oblige AI developers to disclose their use of copyrighted content, implement transparency obligations, and audit algorithms that may affect human rights. The draft must now be discussed by the European Parliament, the Council of the European Union and the European Commission, in a phase known as the trilogues.
The bill classifies tools according to their risk level, from minimal to unacceptable. Tools in the latter category are banned and include systems for the social scoring of people and services that encourage users to engage in dangerous activities.
The higher the risk level, the stricter the obligations companies must meet. The draft covers uses such as biometric surveillance, misinformation and discriminatory language, and establishes data governance, privacy and human oversight duties for AI models.
A key point of the law is its prohibition of "subliminal techniques" that manipulate users' behavior in ways that may cause them physical or psychological harm. The rule is clearly meant to prevent certain kinds of harmful influence, but because subliminal techniques by definition operate without users' awareness, it would apply to ambiguous terrain that is difficult to police: the very consciousness of those who interact with these systems. And that is without forgetting that, long before anyone talked about AI, the selection and recommendation algorithms of social media were already exerting a powerful influence on users' content consumption habits.
For Risto Uuk, a researcher at the Future of Life Institute, the bill concentrates too heavily on the particular harm a manipulation technique can cause to an individual while leaving aside harms that could affect society as a whole. Phenomena such as swaying the course of an election or reinforcing inequality have a social rather than an individual dimension, and they are not covered by the provisions on manipulation.
Another relevant point concerns the use of copyrighted material by models such as ChatGPT and DALL-E, which companies would have to disclose and which could expose them to legal risk.
Although this issue has been fueling lawsuits against developers for months - the same lawsuits Altman avoided discussing at his congressional hearing - copyright infringement of this kind is not easy to prove. For Sergey Lagodinsky, a member of the European Parliament involved in the project, an analogy can be drawn with novel writing:
"It's like reading hundreds of novels before writing your own. It's one thing to copy something and publish it. But if you're not directly plagiarizing someone else's material, it doesn't matter what you trained with."
Either way, the phenomenon is sure to fuel discussions about fair use and about AI's influence on and reinterpretation of existing works.
As in other cases, such as the Digital Services Act, this first Western initiative to regulate AI is likely to trigger what is known as the "Brussels effect": an impact that reaches beyond the borders of the European Union. The regulation is expected to be approved by the end of 2023.
As was the case with the Marco Civil da Internet and the current "fake news" bill, Brazil is a regional laboratory for the regulation of new technologies. Recently, Rodrigo Pacheco, president of the country's Senate, presented to that chamber a bill aimed at protecting the exercise of human rights against potential threats posed by AI.
The bill is the result of a two-year process that included public hearings, a seminar with international experts and a commission of jurists. It largely coincides with the project under way in the European Union: it prohibits subliminal techniques that steer people's behavior in ways that endanger their health or safety, as well as government use of tools that score citizens according to their behavior or attributes in order to grant access to public policies.
The text consolidates three other bills that have been under discussion in the country since 2019. The Direitos na Rede coalition, which brings together fifty digital rights organizations in Brazil, has championed the Senate's debate of the law, stressing that the dialogue must involve different stakeholders and that the project must not be reduced to a mere transplant of foreign standards that ignores the local context.
The bill reaches Congress at a moment when the debate over technology regulation has descended into a crossfire between platform interests, opposition pressure, and government proposals to impose limits on social media.