What is the AI industry doing to protect election integrity?

"Contemporary digital artwork showcasing a robot politician in a realistic setting", DALL-E.

In the run-up to the 2024 election calendar, which includes more than 60 elections around the world, OpenAI, the company behind ChatGPT, announced measures to address how its technologies can be misused to mislead voters and distort public debate.

In its press release, OpenAI states that its services may no longer be used to build chatbots that impersonate candidates or public institutions, or that use false information to dissuade people from participating in democratic processes. The company also said it seeks to prevent its models, which enable mass content creation, from being used to run influence operations.

Already in 2023, the rise of artificial intelligence produced several cases in which synthetic audio, text, and video were used to attack rivals in political campaigns, as happened in the presidential elections in Slovakia and Argentina. In recent years, regimes in Latin America have also used artificial intelligence to promote official propaganda or attack political opponents, in countries such as Nicaragua and Venezuela.

The company also indicated that, while it is still evaluating how persuasive its technologies can be, it will prohibit the development of applications for political campaigning and lobbying. Despite the ban, the advantages of using AI for these purposes will surely tempt many parties and political marketing agencies in 2024, especially since its effectiveness has already been demonstrated.

This was the case last year in the United States, when the Democratic Party tested AI-generated fundraising messages. The campaign got more responses from the public and raised more donations than when the texts were written by people.

OpenAI's announcements come at a time when the potential of these technologies to amplify false narratives on social media is considered a serious threat to humanity. For example, the Global Risks Report recently presented at the World Economic Forum ranked misinformation and disinformation as the top near-term global risks, ahead of the climate crisis and international armed conflict.

According to the document, disinformation in electoral contexts, driven by AI techniques, "could seriously destabilize the real and perceived legitimacy of newly elected governments, present risks of political unrest, violence and terrorism, and an erosion of democratic processes in the long term."

While it is important for OpenAI to make progress in designing rules that prevent the most harmful effects of AI, its efforts seem insufficient to contain this phenomenon. AI development companies also need to invest in the content moderation teams and systems that allow them to enforce their own rules, something OpenAI has already struggled with.

Recently, for example, it emerged that the GPT Store, OpenAI's marketplace for ChatGPT-based models created by other users, had been flooded with girlfriend simulators, even though its rules prohibit models that offer synthetic romantic relationships.

OpenAI is a major player in the industry, but containing AI risks involves a larger ecosystem, one that also includes social media platforms and other development companies that are off the radar of public opinion, the press, and regulators, and therefore face less pressure to build safe products.

Even the industry's other heavyweights are making only modest efforts to protect election integrity. Last December, Google announced that it would restrict election-related queries in Bard, its ChatGPT competitor, as well as in its "generative search experience," which feeds AI answers into the traditional search engine. However, its policy on generative AI use includes no specific rules on creating political or election-related content; it is limited to prohibiting the use of its platforms to generate and distribute misleading content, a broad clause that does not fully address this technology's potential impact on democracy.

This article originally appeared in Botando Corriente, our newsletter.
