The new wave of online child safety laws

2/7/2024
"Analog collage depicting abstracted teenage figures interacting with smartphones in a red and black contrast," interpreted by Dall-E.

"No one should have had to go through the things your families have gone through," Mark Zuckerberg told family members of teenagers affected by social media use last week. The act of forgiveness grabbed the main headlines of the latest appearance of social networking companies in the U.S. Congress. Beyond this obligatory 'performance' by the CEO of Meta, on the horizon are the various legislative initiatives that are being processed in this and other countries to protect minors online and that could change the rules of the Internet game.

For years, the platforms have been accused of harming their users, especially the youngest ones, on several fronts: exposing them to illegal content, promoting conversations that encourage eating disorders, allowing bullying, and facilitating the trafficking of child sexual exploitation material.

For example, in 2023 and so far in 2024, the Te Protejo reporting line, developed by the Colombian organization Red PaPaz, has received 39,000 reports of situations that threaten or violate the rights of children and adolescents, 97% of which correspond to digital spaces.

Based on these concerns, a wave of regulations to protect minors has begun to emerge across the United States. In 2022, the state of Louisiana passed a law under which any company that distributes material harmful to minors on websites is liable if it does not have reasonable age verification methods in place. Since then, seven more states have adopted similar laws, and thirty more are expected to do so later this year.

For organizations such as the Free Speech Coalition (FSC), initiatives of this kind may end up undermining users' online freedom of expression, anonymity, and privacy. Under these rules, Internet companies can check users' ages through official identity documents, bank details, or commercial databases. For the FSC, these proposals are problematic: on the one hand, users could easily circumvent the legal obstacles by using VPNs (virtual private networks); on the other, transmitting sensitive data such as identity documents opens up security risks.

While these regulations emerge at the state level, a nationwide bill is making its way through the U.S. Capitol. The Kids Online Safety Act (KOSA) seeks to prevent the exposure of minors to dangerous content. However, the bill leaves the decision on what counts as dangerous in the hands of each state's attorney general, which some sectors of civil society fear could lend itself to arbitrary limitations on freedom of expression.

According to the Electronic Frontier Foundation, the design of the law could encourage platforms, in order to avoid sanctions, to suppress conversations about issues such as racial discrimination or gender identity, which in some contexts in the United States have been labeled sensitive topics for children.

Discussions about regulating the Internet to protect minors are also complicated by the question of how much social networks really affect the mental health of young people.

In recent years, research has pointed to a link between platform activity and the mental health crisis among adolescents. An internal Instagram study, revealed in the company's 2021 document leak, even found that the platform worsened body image issues for one in three teenage girls.

In 2023, a report by the U.S. Surgeon General indicated that despite certain benefits of social networks for adolescents (receiving support in difficult times, staying in contact with others, or having a space to express their creativity), the platforms also have the potential to harm the mental health of minors.

However, recent studies have given rise to some skepticism. In the same hearing in which he apologized to the families, Mark Zuckerberg cited a National Academy of Sciences report suggesting that there is insufficient evidence to establish a causal link between declining mental health and social network use, a view that coincides with research from the Oxford Internet Institute published in late 2023.

Still, both studies indicate that more research is needed to reach a definitive conclusion, which requires access to more data from the platforms. "This data exists and is continually analyzed by global technology companies for marketing and product improvements, but unfortunately it is not accessible to independent research," the Oxford paper notes.

Beyond the level of precision that can be reached on this point, online child safety laws pursue a desirable goal: to protect children and adolescents in digital spaces and to make digital platforms responsible for their own products. "The principles of safety and ethics by design must become minimum requirements for the development of products and services, and it is imperative that they be regulated and mandatory for all actors involved," says the organization Red PaPaz.

This article originally appeared in Botando Corriente, our newsletter. You can subscribe here: