The elections in Panama, held at the beginning of May, revealed the authorities' eagerness to have content removed from social networks. The problem sits within a complex regulatory framework and raises the question of who should ultimately guarantee freedom of expression.
According to an investigation by the Latin American Center for Investigative Journalism (CLIP), since March the Panamanian Electoral Tribunal has sought the removal of content published on Instagram and Facebook for not complying with electoral propaganda rules.
It was a set of publications of very different kinds, directed especially against two candidates: Romulo Roux and Ricardo Lombana. The posts used artificial intelligence tools to falsely suggest that the candidates had said things about their private lives or their campaign proposals.
In one of the pieces, Roux was presented as the protagonist of Monopolio, seeking to privatize or mortgage the nation's assets; in another, Lombana was made to appear to boast of having "broken up his family" and that of his fiancée, Yira Gorricháteguei, so that the two could take "another path" together.
According to the Electoral Tribunal, the content infringed various local laws, such as the rules of the Electoral Code that prohibit the unauthorized use of political party symbols or of the personal image of third parties. Despite the authority's request that the content be removed, the company did not act.
In order to curb the waves of disinformation on social networks, various countries have in recent years reformed their domestic laws or created new rules to punish these phenomena. In the eagerness to tackle coordinated operations or fake news, the ambiguity and vagueness of some regulations have opened the door to measures that may be excessively restrictive of freedom of expression.
This is the case of Panama's Electoral Code, invoked by the Electoral Tribunal to act against this content. The law expressly prohibits, for example, the use of candidates' images or their parties' symbols, or the editing of images, audio or video for purposes of manipulation. Rules of this type, drafted without further explanation and with considerable ambiguity, could just as easily cover memes, satire or political criticism.
The law even establishes prison sentences for those who manipulate "digital media in a massive way, with the purpose of altering or affecting the integrity of an electoral process". Despite the seriousness of the penalties, the description of the conduct is too broad and could reach legitimate activities, such as passing along a chain of disinformation in the mistaken belief that a false news item is true.
In the past, some measures of the Panamanian Electoral Tribunal have been questioned for affecting freedom of expression. In October 2023, the agency ordered two media outlets and a journalist to withdraw publications about former president Ricardo Martinelli, who was then seeking to take part in the campaign. On that occasion, the Tribunal considered that journalistic content critical of the former president, who has since been convicted of corruption, constituted a "dirty campaign".
Rules of this type have also been implemented in countries such as Venezuela, Peru and Nicaragua, where, to varying degrees, the dissemination of false information may constitute a crime.
This regulatory phenomenon has two main causes. On the one hand, the openly authoritarian character of some states, which seek to arm themselves with tools to control speech on social networks; on the other, the difficulty legislators face in finding appropriate definitions and measures to address complex phenomena such as disinformation without creating a digital gag for their citizens.
How do platforms react when a State asks them to remove content for violating domestic law? In general, the platforms first assess whether the content reported by the authorities violates their own community rules. If it does not, a legal review is carried out to check that the request is valid and that a law does indeed prohibit the content.
According to Meta's established process, when content is deemed to be in violation of a law, the company restricts it only in that territory.
Even if content is found to be illegal in a country, Meta may refuse to remove it. "When we encounter conflicts between domestic laws and our human rights and transparency commitments, we seek to honor internationally recognized human rights principles," the company states in its policy.
However, platforms' decisions on these requests depend not only on their goodwill but also on political and legal calculations. In Meta's case, the company notes that in these situations it also weighs the risks of refusing to comply with the authorities' demands, such as service blocks, sanctions or regulatory action against it.
Thus, companies face a scenario in which they must choose between guaranteeing freedom of expression in repressive contexts and accepting the possible negative consequences of not complying with court orders or government requests. In many cases, fear of the latter outweighs the duty of the former.
In the Panamanian case, it is not possible to verify Meta's reasons for leaving the content online, even though, according to CLIP's reporting, the company maintains a cooperation agreement with the Tribunal.
In addition, recent reports point to increased compliance by social media companies with these kinds of requests. According to research by Rest of World, content removal requests by states have increased in countries where restrictive speech laws have been passed in recent years, such as India, Turkey and the United Arab Emirates.
While authorities have a duty to enforce the law and to report content that is illegal in their countries, such requests can be a poisoned apple of censorship. The Panamanian case raises the question of who ultimately bears the responsibility for defending human rights online and resisting attempts to remove content disproportionately under the guise of legality.