Discord CEO Jason Citron said Thursday that he found reports of child exploitation on the popular chat platform “horrifying” and that Discord took the issue “very seriously.”
His comments, at Bloomberg’s Tech Summit in San Francisco, came the day after NBC News published an investigation into the issue.
“As a parent, it’s horrifying,” Citron said in response to questions from Bloomberg journalist Emily Chang. “We take this stuff very seriously.”
Citron noted that Discord employs a dedicated child safety team that is tasked with trying to prevent exploitation on the platform “in a way that respects the privacy of all the people who are not doing these things.”
The investigation revealed that since the platform’s creation in 2015, at least 35 child abduction, grooming, or exploitation prosecutions involved communications via Discord, and 165 child sexual abuse material prosecutions involved the platform. Additionally, NBC News identified hundreds of active Discord servers promoting child exploitation.
“What we see is only the tip of the iceberg,” said Stephen Sauer, the director of the tipline at the Canadian Centre for Child Protection.
Like many social media companies, Discord scans uploaded images and videos and compares them against a database of known child sexual abuse material, but it leaves most other moderation to the communities themselves. Discord has said it does not proactively scan most messages posted in its communities.
Citron said artificial intelligence could help solve some issues around child exploitation. “One of the challenges I think that all of the folks in our industry have is that we have so many things happening at scale on the platform and it’s so hard to sort of identify things,” he said.
John Redgrave, vice president of trust and safety at Discord, told NBC News that the company was working with THORN, a company devoted to building technology solutions to detect and prevent child exploitation, on a model that could detect grooming behavior.
AI has also been criticized, however, for potentially aggravating child safety issues.
This month, the FBI warned that adults were using AI to generate manipulated sexual images of children for the purpose of sextortion, using the images to blackmail minors for more sexual content or for money.