In every industry, someone does the dirty work. At social media companies, the job falls to content moderators: the people who wade through the platforms' sewage to review what users share and take down content that is not allowed, whether because it breaks community standards or the law itself.
Every day, users upload millions of posts that are automatically reviewed by artificial intelligence systems, which evaluate whether the content is fit for the platform even before it goes live. These systems are trained to detect, among other things, images related to terrorist acts or the sexual exploitation of minors.
Automated pre-screening, however, is more the exception than the rule. The bulk of content moderation falls to flesh-and-blood agents who try to apply internal manuals and hours of training to problematic material. They number in the tens of thousands around the world, usually hired through outsourcing and customer service firms such as Teleperformance or Accenture, companies that earn up to $500 million a year for cleaning up problematic content on a social network like Facebook.
There are operational and, above all, economic reasons for leaving this service in the hands of third parties. In the United States, the gap between the salaries of Facebook employees and those of outsourced moderators is abysmal: the former earn an average of $240,000 a year, the latter only $28,800, a difference that widens considerably when looking at similar contracts in countries with much lower minimum wages.
Given the enormous number of posts reported daily on the platforms, moderators have less than a minute to make each decision. The high pressure to deliver results, combined with the shortage of time and of adequate tools for the task, inevitably leads to moderation errors, which can cut two ways: leaving problematic content online, or removing content that does not violate the rules and thereby undermining users' freedom of expression and due process.
There is another problematic effect beyond moderation itself: the impact on moderators' mental health. Several studies have documented the harm caused by continuous exposure to problematic content on social media. In other words, these workers are damaged by the very material they are supposed to remove from the platforms: explicit violence, animal abuse, child abuse, threats. A recent Forbes report, in fact, revealed that Teleperformance was using images and videos of child sexual abuse to train TikTok moderators.
A 2019 report by The Verge on the working conditions of content moderators at a Phoenix office described a deeply dysfunctional workplace. To cope with the bombardment of disturbing posts, moderators smoked marijuana on breaks and often did their jobs stoned. Constant exposure to conspiracy theory videos also ended up distorting their perception of reality: many began to deny events such as the Holocaust or the September 11 terrorist attacks.
Depressive thoughts are also a constant: a Twitter moderator in Manila told the Washington Post that his brain kept replaying the images he saw during his workday, and that he often dreamed he was the victim of a suicide bombing or a violent accident. A TikTok moderator said that his time in that role had been more traumatic and challenging than his years in the U.S. Army.
On more than one occasion, the platforms have been sued by groups of moderators alleging serious damage to their mental health. In 2020, Facebook paid $52 million to 11,250 people who developed post-traumatic stress disorder or similar conditions from doing this work. In the middle of this year, YouTube agreed to pay $4.3 million to settle a similar lawsuit.
Difficult working conditions, coupled with growing demand from social media companies, have led to high turnover at the firms providing these services. As a result, leadership at moderation centers has ended up in the hands of employees without enough experience to adequately manage either the workers or the sensitive material that reaches them every day.
Content moderation is an industry in constant growth, projected to reach a value of US$11.8 billion within five years. As in other areas, scandals and public pressure will force the platforms to change their practices and improve the conditions under which this activity is carried out. For now, however, it remains a link in the chain that is rarely talked about, and it carries a basic, almost existential problem: the difficult conditions under which content moderators work.