Last week, the U.S. Supreme Court heard arguments in two cases that could affect the way social media functions and the structure of the internet as we know it.
In the first, Gonzalez v. Google, the plaintiffs are the parents of a young student killed in the November 2015 terrorist attacks in Paris, attributed to the Islamic State. According to the plaintiffs, Google contributed to the tragedy because YouTube's recommendation algorithms exposed users to extremist content.
The second is Twitter v. Taamneh, a similar case in which relatives of a person killed in a 2017 terrorist attack in Istanbul seek to hold the company liable for failing to promptly remove content through which the Islamic State recruited and trained terrorists on its platform.
At the heart of the matter is Section 230 of the Communications Decency Act, a U.S. provision that grants internet intermediaries, including social media platforms, immunity for content that third parties publish on their services, and that also empowers those intermediaries to moderate conversations according to their own criteria. The rule has enabled the development of today's internet and the growth of companies such as Facebook, Twitter, and YouTube, which could not exist if they had to answer for their users' publications.
In recent years, Section 230 has been the subject of dozens of bills seeking to eliminate or modify it in many different ways, none of which have so far succeeded. For Republicans, starting with Donald Trump, the rule has become a license to censor conservative views. For Democrats, led by Joe Biden, the rule is too lax given the responsibility social media companies should bear for controlling disinformation and hate speech.
The concern is that the future of this law is now in the hands of the Supreme Court, an institution that has traditionally sought to resolve problems through universal and virtually immovable principles, a method that seems incompatible with the constant evolution of technology, as journalists Sam Baker and Ashley Gold have noted.
The first hearing, in the Gonzalez v. Google case, made clear the difficulties of resolving such a technical issue through the courts. Because the lawsuit seeks to hold the company liable for the decisions of its recommender systems, the justices' questions probed a purported "algorithm neutrality," a vague concept that could be used to determine when platforms retain or lose their immunity for what they recommend to users. Yet this notion contradicts how algorithms actually work: by definition, they prioritize some content over others.
While it is unlikely that the plaintiffs' claims will be upheld, since there has not even been an attempt to prove that YouTube's recommendations or Twitter's actions in fact led any user to join a terrorist group, the way the Court decides these two cases may touch the cornerstone of the digital world. As Justice Elena Kagan herself put it during the Google hearing, the justices "are not the nine greatest experts on the Internet."