These are the rules of the platforms for these elections

2/23/2022

Many things have changed in the digital public debate in Colombia since the last elections. The pandemic, the tense 2020 campaign in the United States and its dramatic epilogue with the storming of the Capitol, among other events, left their mark on the norms of social media, which in different ways tried to deal with the problematic content that flooded their platforms.

While companies already had rules related to electoral contexts, it is clear that some new rules are a direct reaction to events in the United States. Facebook, for example, banned messages that sought to dissuade voters from going to the polls because of the risk of catching covid-19. Twitter, for its part, announced sanctions for those who post misleading information about election results.

Although these changes were introduced gradually and in response to the race between Donald Trump and Joe Biden, they may influence the electoral discussion in Colombia, as has already happened in other parts of the world. At Circuito we scanned the policies of Facebook, Twitter, YouTube and TikTok, and this is what we found.

Platforms' concerns

In general, companies have oriented their election policies to attack four fronts. The first of these is disinformation, understood as content intended to mislead about the conditions of an election or the qualities of a candidate. The rules that the platforms have established include prohibitions on publishing false information about voting dates and locations, vote-counting processes or the requirements for a person to be elected. 

Among the different forms of disinformation on social media there is also manipulated content, which on some platforms has a specific policy. These are videos or images altered to deceive or confuse, such as falsely showing that a candidate said something he or she never actually said. This type of disinformation covers sophisticated editing methods such as deepfakes, where artificial intelligence is used to create images that appear real, but also simpler forms of deception, such as sharing a photo without context or with a false context.

This tweet is an example of false information about candidate eligibility.


The second category comprises rules aimed at preventing interference and intimidation in electoral processes: content that suppresses voter participation by dissuading people from going to the polls or inciting others to disrupt the process. This is the case of publications claiming that going to vote will result in problems with the police, or calling on citizens to attack polling places or to show up there to intimidate polling officials or observers.

Thirdly, there are prohibitions directly related to the commission of electoral crimes. Here, only Facebook has rules to prevent its platform from becoming a vehicle for fraud, targeting publications that offer to buy votes or promote illegal participation in an election.

Another category is made up of rules to prevent the delegitimization of an electoral process or its results. These rules prevent a candidate from claiming victory before official information is available, or from claiming that the results have been altered.

What happens if someone violates these rules?

Beyond stating the rules, the fundamental question is how they are applied and what the consequences of non-compliance are. Some platforms merely state the prohibition, while others describe the resulting effects in detail.

Although Facebook tends to explain in detail what content is prohibited and how any of its rules could be violated, it does not define the sanctions, or mentions them only briefly. Of the policies that could apply to an electoral process, only two sanctions are expressly stated: the removal of manipulated content and the removal of accounts that impersonate another person or organization. In any case, these clarifications do not resolve questions about the process, such as what happens to repeat offenders.

Twitter, for its part, has a whole catalog of measures depending on the severity of the violation and the user's history of non-compliance, including the deletion of tweets, the temporary or permanent blocking of an account, and the application of labels to give more context to a publication or to warn about its content. For its civic integrity policy, where it groups rules on elections and other civic processes, Twitter has a strikes system in which accumulated violations trigger higher penalties. After five violations, the platform can permanently suspend an account.

YouTube works along the same lines, with a strikes system that covers all its policies. In the event of a violation, the platform can restrict the publication of videos or stories, the creation of custom thumbnails or the editing of playlists, among other measures. Upon accumulating three strikes in a 90-day period, the channel is permanently deleted.


These special rules on electoral processes must be understood within the broader framework of the platforms' general rules and other considerations. For example, under Facebook's cross-check system, before sanctioning a high-profile account (that of a publicly recognized person), an additional review is done by a content moderator, the markets team or even a company executive.

Facebook and Twitter also have a public interest exception, under which prohibited content is not removed because of the importance of knowing what a public figure, such as an official, a candidate or a political party, thinks about a certain topic.

Despite violating its civic integrity policy, Twitter decided not to remove this 2020 tweet from Donald Trump in which he claimed that voting by mail allowed a person to vote more than once.


Finally, despite the platforms' recent progress in developing election-related policies, it is worth asking how actively they will be applied outside the United States. As became known last year, Facebook has failed to detect prohibited content in languages other than English and to understand specific contexts. It is possible that the presidential elections in Colombia do not represent a significant concern for the platforms, which designed their policies with an eye on other places. This is the case of a YouTube rule prohibiting claims of electoral fraud, which applies only to the United States and the 2021 German federal elections.

If some platforms walk, TikTok barely crawls

Despite its massive growth in recent years, TikTok's community standards are not as developed as those of other platforms. Its entire election policy boils down to a broad clause that prohibits posting "content that misleads members about elections or other civic processes." This formula does not delve into what might be considered misleading content, nor does it provide insight into what would happen if, for example, someone wanted to claim an early victory.

This vagueness is problematic at a time when, for the first time in Colombia, political candidates are jumping on the bandwagon of this social network's short videos and trends, and when not all the fronts of harmful content in an electoral context are yet covered.

It is important to note that the existence of these new rules does not mean that other, more general rules do not apply. For example, even if TikTok's election policy is reduced to a single sentence about disinformation, a threat against an election official would likely be punishable under its violent extremism rule.

In any case, beyond the existence of sufficiently detailed rules, the question remains how much the platforms' moderation teams will intervene in countries such as Colombia or Chile, the former with its general elections and the latter with the exit plebiscite of the constitutional convention. Other elections in the region, such as those in Chile or Peru, have not merited the deployment that the platforms set in motion for the United States or Germany.

Either scenario will present challenges and tensions: if the platforms do not intervene, criticism will focus on the lack of consistency and scalability in content moderation. Conversely, if they apply their rules decisively, questions will be directed at the legitimacy and validity of those decisions and the possibility of challenging them. A candidate who is suspended from a platform or whose content is removed will probably denounce it publicly as censorship.

The following table indicates what kind of election-related content the platforms prohibit. The columns to the right indicate whether the platform has a specific rule for that type of publication. 

