For a year, the war in the Middle East has entered us through our eyes, which is another way of saying that it has entered us through social networks. In that time, explicit images of war, inflammatory speeches and the need to know what is true amid so much noise and fire have cast the platforms as both the burning stage and the firefighter.
Since October 7 of last year, terrorist attacks and military actions have been accompanied by a digital offensive. As Hamas operatives targeted civilians in Israel, more than 40,000 accounts, largely fake, flooded Facebook, Instagram, TikTok and X with posts justifying the attacks, showcasing the group's supposed compassion for Israeli hostages and using images from other contexts to distort the events.
It was the first in a series of operations competing for control of truth and influence online. On the other side, in June of this year it came to light that a network operating on Facebook, X and YouTube was systematically praising Israel's military actions and demanding the release of the hostages. The operation played out in the comment sections of media outlets and of posts by political figures in the United States and Canada, even when they had nothing to do with the war. Behind the stunt was STOIC, a Tel Aviv-based marketing and business intelligence agency, which reportedly used the services of OpenAI, the company that developed ChatGPT, to generate content at scale and information for the fake profiles.
Although advances in artificial intelligence led many to cast the technology as a new weapon of mass destruction in this conflict, its impact has been more episodic than widespread. As has also been the case in elections around the world this year, the damage has come not so much from the false content that can be created as from the possibility of claiming that real content is the product of AI, thereby undermining its credibility.
In any case, the wave of disinformation associated with the conflict put regulators under pressure and served as a first test for the European Union's Digital Services Act, an ambitious regulation that, among other things, aims to curb online disinformation. To cope, the European Commission, which is responsible for enforcing the law, had to hire external digital investigators, since it lacked the technical resources to handle the emergency.
Explicit war content, including images of bombings and of wounded and mutilated bodies under the rubble, not to mention heartbreaking videos of child victims, has been fuel for recommendation algorithms. A few days into the conflict, X adjusted its policies to capitalize on the war's violent content, reasoning that events would be discussed in real time despite the sensitivity of the images.
This crisis is compounded by the difficulty of sharing information and speaking out about the war. With alarms raised over speech in favor of terrorist groups, as well as anti-Semitic or Islamophobic speech, the platforms' moderation systems have failed time and again.
In the early days of the conflict, a journalist posted on Facebook about an interview he had conducted years earlier with Hamas co-founder Abdel Aziz Al Rantisi. In the post, the journalist described his trip to Gaza, his meeting with members of the group and his conversation with its leader. The mention of Hamas, however, led the company to remove the content for allegedly violating its Dangerous Organizations and Individuals policy, which prohibits support for terrorist groups.
Under this misapplied standard, Meta also removed a Washington Post article with a timeline of the Arab-Israeli conflict, as well as testimonies of women who suffered sexual violence at the hands of Hamas, as emerged from three cases reviewed by the company's Oversight Board.
In the troubled waters of digital discourse, some platforms' actions have ended up unjustifiably restricting messages of resistance, mistakenly treated as hate speech.
This is the case of the slogan "From the river to the sea, Palestine will be free." Widely used in support of the Palestinian people's self-determination, the slogan has also been interpreted as a call for the extermination of Jews. Moreover, because it appears in the 2017 Hamas Charter, it has in some cases been removed from Meta's social networks for allegedly glorifying that terrorist group.
A few weeks ago, Meta's Oversight Board called attention to the practice of systematically removing content that includes this slogan, since the phrase does not in itself constitute harm or a violation of the company's own rules. The body also recommended that Meta modify its policies so that expressions such as "Israelis are criminals" are not treated as hate speech based on nationality, but as expressions of outrage at abuses and crimes committed by a state.
The conflict in the Middle East, which has meant terrorism, repression and crossfire, has turned the digital environment into an extension of the battlefield. Meanwhile, the war spreads virally to other countries, and the chance that the networks will give us a true picture of what is happening keeps receding.
The author is editor of Circuito and coordinator of the content moderation area at Linterna Verde.