Platforms and Russian invasion: the iron curtain of the Internet is emerging

3/21/2022

It is said that in war the first casualty is the truth. In Russia's invasion of Ukraine, the Internet has accordingly become a battleground for information. In the midst of the war we have seen waves of disinformation, media outlets and journalists being labeled, channels deleted, and viral videos about alleged nuclear bombs. What's going on? Here's a look at some of the highlights of the digital front of this conflict.

Meta takes sides

Very soon after Russia's invasion of Ukraine on February 24, Meta took sides. Its position was defined by two decisions. On the one hand, the company announced on February 26 that it had rejected an "order from the Russian authorities" to stop labeling content produced by Russian media that independent fact-checkers had flagged as false. Complying with the order would have meant setting aside a rule Meta has applied for years to all kinds of disinformation.

In contrast, on February 27, Meta announced that it would accede to the Ukrainian government's request to restrict access from Ukraine to Russian government media accounts. Hours later, it announced that the restriction would apply to RT and Sputnik media accounts in European Union countries. Such actions are not provided for in the platform's rules. The request, the company said at the time, was accepted because of the exceptional nature of the facts.

Facebook alerts users when content has been flagged as false by fact-checkers and provides additional contextual information.

As we will see, these events marked the beginning of a series of decisions by the platforms and the Russian government that challenge, more than ever, the idea of an open Internet subject to similar standards worldwide.

Chain reaction

Following Facebook's lead, on February 28 Twitter began labeling accounts of "Russian government-affiliated media." However, the network also labeled journalists who have written for these outlets, including Latin American freelancers, a decision it has reversed in some cases. For its part, Meta announced on March 1 that it would reduce the worldwide visibility of media accounts controlled by the Russian government.

Some Latin American freelancers were labeled by Twitter. The measure was criticized by freedom of expression advocates, who pointed out that, in addition to confusing users, it could expose journalists to attacks.

Of course, Russia did not stand idly by. In retaliation for these actions, the Russian government completely blocked access to Facebook and Twitter on March 4. That same day, the Russian parliament passed a disinformation law that punishes those who "lie" about or "discredit" the country's armed forces with up to fifteen years in prison. Although this law, aimed at controlling information in the country, mainly affects media outlets and journalists, it has a direct impact on how social media platforms operate: its approval prompted TikTok to announce shortly afterwards that, in light of the law, it had no choice but to suspend live broadcasts and the publication of new content in Russia.

Second round

After the first restrictions, the platforms continued to act to contain Russia, in part aligned with the sanctions that Western governments were imposing. Sputnik's and RT's YouTube channels were blocked worldwide. Apps from these media outlets were removed from Google Play in Europe and Apple's App Store globally. 

But the tension between Russia and Meta escalated when, on March 10, the platform announced that it would relax its content policies to allow users of its social networks in Ukraine to wish death upon or call for violence against Russian soldiers, Vladimir Putin, or his ally Alexander Lukashenko, president of Belarus. Posts of this kind would normally be banned under hate speech rules. This is not the first time Meta has made such exceptions in difficult contexts: in 2021, it allowed users to wish death upon Ayatollah Khamenei during that year's protests in Iran.

In response, Russia's prosecutor general announced on March 11 that he would seek to have Meta designated an extremist organization and its activities banned in the country. Following through, the government blocked access to Instagram in Russia on March 14.

Disinformation explosion

As with other events of global interest, the invasion of Ukraine has triggered disinformation on social networks. When there are no reliable sources to satisfy the need to know what is happening in Ukraine, false or misleading information rushes in to fill the void, as John Silva, director of the News Literacy Project, explained to the AP.

This is the first major conflict of the TikTok era, a platform where edited and shared videos can reach millions of people quickly, with users paying little attention to the veracity of the content. For example, a video titled "Russia Nuclear Bomb" went viral on the network, reaching 18 million views before being removed for violating integrity and authenticity policies. There have also been reports of fake livestreams in which users, with altered material or background noises, claim to be in Ukraine in order to ask for donations.

Decontextualized content has also thrived amid the bombardment of information. A video of a reporter standing in front of dozens of body bags went viral because, in the background of the footage, one of the supposed dead can be seen moving and even uncovering his face. Many users took this as proof that reports of Ukrainian deaths were just dirty propaganda against Russia. It later emerged, however, that the images had nothing to do with Ukraine: they had been recorded during a protest in Vienna, where demonstrators were seeking to draw attention to the deaths that global warming can cause.

Since the beginning of the invasion, Facebook, Twitter, and YouTube have announced that they dismantled coordinated networks of accounts that spread disinformation and functioned as content factories against Ukraine, portraying it as a failed state betrayed by Western countries. Traditionally, this has been how platforms seek to reduce disinformation. Rather than removing false content (an action reserved for exceptional cases, as with Covid-19), platforms punish coordinated inauthentic behavior: the massive, coordinated creation of fake accounts or pages to spread falsehoods or propaganda.

What do all these measures mean?

As journalists Mark Scott and Rebecca Kern have noted, the platforms were born and developed in a world without iron curtains, where the old blocs coexisted in relative calm. With the invasion of Ukraine, the companies had to react and, rather than be caught in the crossfire, chose to align themselves quickly with the West. However, their actions leave important questions and concerns.

On the one hand, it is clear that the people most affected by these measures and by the Russian government's retaliation are the inhabitants of that country, who are increasingly isolated from reliable sources of information and have fewer channels to express their rejection of the government, to organize, or to report on what is happening there.

It has also become clear that, in aligning against Russia, platforms are willing to set their own rules aside and implement drastic, unplanned measures. The episode of the hasty labeling of journalists on Twitter likewise shows a degree of improvisation. All of this revives questions about the platforms' lack of consistency in applying their own rules.

Finally, this battle for information revives conversations about the possible fragmentation of the network for political reasons, known as the "balkanization of the Internet." Will this be the event that definitively changes the Internet as we know it?
