A new report suggests that the lax content moderation policies of Mastodon and other decentralised social media platforms have led to a proliferation of child sexual abuse material. Stanford’s Internet Observatory published new research this week that shows that decentralised sites have serious shortcomings when it comes to “child safety infrastructure.” Unfortunately, that doesn’t make them all that different from a majority of platforms on the normal internet.
When we talk about the “decentralised” web, we’re talking about “federated” social media, or “the Fediverse”: the loose constellation of platforms that eschew centralised ownership and governance for a model that instead prioritises user autonomy and privacy. The Fediverse runs on free and open-source web protocols that let anyone set up and host social communities via their own servers, or “instances,” as they’re known. Of the platforms that make up this decentralised realm, Mastodon is among the most popular and widely used. Still, next to the centralised internet, the Fediverse remains markedly less trodden territory; at its height, Mastodon boasted about 2.5 million users, compared with Twitter’s recent daily active user numbers, which hover somewhere around 250 million.
Despite the exciting promise of the Fediverse, there are obvious problems with its model. Security threats, for one thing, are an issue. The ecosystem’s limited user-friendliness has also been a source of contention. And, as the new Stanford study notes, the lack of centralised oversight means there aren’t enough guardrails built into the ecosystem to defend against the proliferation of illegal and immoral content. Indeed, researchers say that over a two-day period they encountered approximately 600 pieces of known or suspected CSAM on top Mastodon instances. Horrifyingly, the first piece was discovered within five minutes of starting the research. In general, researchers say the content was easily accessible and could be searched for with ease.
The report further breaks down why the content was so accessible…
…bad actors tend to go to the platform with the most lax moderation and enforcement policies. This means that decentralised networks, in which some instances have limited resources or choose not to act, may struggle with detecting or mitigating Child Sexual Abuse Material (CSAM). Federation currently results in redundancies and inefficiencies that make it difficult to stem CSAM, non-consensual intimate imagery (NCII) and other noxious and illegal content and behavior.
Gizmodo reached out to Mastodon for comment on the new research but did not hear back. We will update this story if the platform responds.
The “centralised” web also has a massive CSAM problem
Despite the findings of the Stanford report, it bears consideration that just because a site is “centralised” or has “oversight” that doesn’t mean it has less illegal content. Indeed, recent investigations have shown that most major social media platforms are swimming with child abuse material. Even if a site has an advanced content moderation system, that doesn’t mean that system is particularly good at identifying and weeding out despicable content.
Case in point: in February, a report from the New York Times showed that Twitter had purged a stunning 400,000 user accounts for having “created, distributed, or engaged with CSAM.” Despite the bird app’s proactive takedown of accounts, the report noted that Twitter’s Safety team seemed to be “failing” in its mission to rid the platform of a mind-boggling amount of abuse material.
Similarly, a recent Wall Street Journal investigation showed that not only is there a stunning amount of child abuse material floating around Instagram, but that the platform’s algorithms had actively “promoted” such content to pedophiles. Indeed, according to the Journal article, Instagram has been responsible for guiding pedophiles “to [CSAM] content sellers via recommendation systems that excel at linking those who share niche interests.” Following the publication of the Journal’s report, Instagram’s parent company Meta said that it had created an internal team to deal with the problem.
The need for “new tools for a new environment”
While both the centralised and decentralised webs clearly struggle with CSAM proliferation, the new Stanford report’s lead researcher, David Thiel, says that the Fediverse is particularly vulnerable to this problem. Sure, “centralised” platforms may not be particularly good at identifying illegal content, but when they have to take it down, they have the tools to do it. Platforms like Mastodon, meanwhile, lack the infrastructure to deal with CSAM at scale, says Thiel.
“There are hardly any built-in Fediverse tools to help manage the problem, whereas large platforms can reject known CSAM in automated fashion very easily,” Thiel told Gizmodo in an email. “Central platforms have ultimate authority for the content and have the capability to stop it as much as possible, but in the Fediverse you just cut off servers with bad actors and move on, which means the content is still distributed and still harming victims.”
“The problem, in my opinion, is not that decentralisation is somehow worse, it’s that every technical tool available for fighting CSAM was designed with a small number of centralised platforms in mind. We need new tools for a new environment, which will take engineering resources and funding.”
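The automated rejection Thiel refers to on large platforms is typically done via hash-list matching: an uploaded file is hashed and compared against a shared blocklist of known material. A minimal sketch of the idea in Python, with two caveats: real systems use perceptual hashes (such as Microsoft’s PhotoDNA) rather than plain cryptographic hashes, so that matches survive re-encoding and resizing, and the blocklist entry below is just the SHA-256 of the string “test,” used as a harmless placeholder.

```python
import hashlib

# Hypothetical blocklist of known-content hashes. In practice this would be
# a large, industry-shared database; the entry here is the SHA-256 digest of
# b"test", included purely as a placeholder for illustration.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def should_reject(upload: bytes) -> bool:
    """Return True if the upload's hash matches the known-content blocklist."""
    digest = hashlib.sha256(upload).hexdigest()
    return digest in KNOWN_BAD_HASHES
```

This is the kind of centralised tooling Thiel is pointing at: it only works when every server checks against the same blocklist, which is exactly the coordination that independently run Fediverse instances currently lack.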
As to which social media ecosystem suffers from a “larger” CSAM problem, the centralised or the decentralised, Thiel said he couldn’t say. “I don’t think we can quantify ‘bigger’ without representative samples and adjusting for user base,” he said.