
Child safety experts are growing increasingly powerless to stop thousands of “AI-generated child sex images” from being easily and rapidly created, then shared across dark web pedophile forums, The Washington Post reported.
This “explosion” of “disturbingly” realistic images could help normalize child sexual exploitation, lure more children into harm’s way, and make it harder for law enforcement to find actual children being harmed, experts told The Post.
Finding victims depicted in child sexual abuse materials is already a “needle in a haystack problem,” Rebecca Portnoff, the director of data science at the nonprofit child-safety group Thorn, told The Post. Now, law enforcement will be further delayed in investigations by efforts to determine if materials are real or not.
Harmful AI materials can also re-victimize anyone whose images of past abuse are used to train AI models to generate fake images.
“Children’s images, including the content of known victims, are being repurposed for this really evil output,” Portnoff told The Post.
Normally, content depicting known victims can be blocked by child-safety tools that hash reported images and detect when they are reshared, allowing online platforms to block the uploads. But that technology only detects previously reported images, not newly AI-generated ones. Both law enforcement and child-safety experts report that these AI images are increasingly being popularized on dark web pedophile forums, with many Internet users “wrongly” viewing the content as a legally gray alternative to trading illegal child sexual abuse materials (CSAM).
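To see why, consider a rough Python sketch of how hash-based matching works. The blocklist contents, function name, and use of the open source imagehash library below are illustrative assumptions, not the proprietary systems (such as PhotoDNA or NCMEC’s hash-sharing programs) that platforms actually deploy:

```python
# Toy sketch of hash-based matching against previously reported images.
# The blocklist contents and helper name below are hypothetical.
from PIL import Image
import imagehash

# Hypothetical set of perceptual hashes of previously reported images.
REPORTED_HASHES = {
    imagehash.hex_to_hash("a1b2c3d4e5f60718"),
}

def is_previously_reported(path: str, max_distance: int = 4) -> bool:
    """Return True if an upload's perceptual hash is close to a reported hash."""
    upload_hash = imagehash.phash(Image.open(path))
    # A small Hamming distance between hashes suggests a re-share of a known image.
    return any(upload_hash - known <= max_distance for known in REPORTED_HASHES)
```

A newly generated image has no corresponding entry in any blocklist, so this style of matching can’t flag it until someone reports it.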
“Roughly 80 percent of respondents” to a poll posted in a dark web forum with 3,000 members said that “they had used or intended to use AI tools to create child sexual abuse images,” ActiveFence, which builds trust and safety tools for online platforms and streaming sites, reported in May.
While some users creating AI images, and even some legal analysts, believe this content may not be illegal because no real children are harmed, some United States Justice Department officials told The Post that AI images sexualizing minors still violate federal child-protection laws. There seems to be no precedent, however, as officials could not cite a single prior case that resulted in federal charges, The Post reported.
As authorities become more aware of the growing problem, the public is being warned to change online behaviors to prevent victimization. Earlier this month, the FBI issued an alert, “warning the public of malicious actors creating synthetic content (commonly referred to as ‘deepfakes’) by manipulating benign photographs or videos to target victims,” including reports of “minor children and non-consenting adults, whose photos or videos were altered into explicit content.”
These images aren’t just spreading on the dark web, either, but on “social media, public forums, or pornographic websites,” the FBI warned. The agency blamed recent technology advancements for the surge in malicious deepfakes, because AI tools like Stable Diffusion, Midjourney, and DALL-E can be used to generate realistic images based on simple text prompts. These advancements are “continuously improving the quality, customizability, and accessibility of artificial intelligence (AI)-enabled content creation,” the FBI warned.
“Once circulated, victims can face significant challenges in preventing the continual sharing of the manipulated content or removal from the Internet,” the FBI’s alert said. “This leaves them vulnerable to embarrassment, harassment, extortion, financial loss, or continued long-term re-victimization.”
Some companies behind AI image generators have attempted to fight malicious content by restricting thousands of keywords, but ActiveFence’s report indicated that users of dark web pedophile forums trade guides spelling out how to work around keyword restrictions and other safeguards. Some tools, like Stable Diffusion, are open source, which can make the images they generate harder to track and police online, The Post reported. Stability AI, which created Stable Diffusion, doesn’t record all of the images its tool generates, as the makers of Midjourney and DALL-E do, The Post reported.
Stability AI told The Post that it bans obscene content creation, cooperates with law enforcement to find violators of its policies, and “has removed explicit material from its training data” to prevent any future attempts at creating obscene content.
However, The Post reported these solutions are imperfect, because “anyone can download” Stable Diffusion “to their computer and run it however they want.” At that point, safeguards, like explicit image filters, can be easily bypassed just by adding a few lines of code—which users can seemingly learn about by reading a dark web guide to using AI tools for this purpose.
The FBI urged parents to be aware of the dangers, limit what minors share, and monitor who is allowed to see minors’ content online. The agency pointed to hashing resources like Take It Down, operated by the National Center for Missing and Exploited Children (NCMEC), to report content and have it removed, but that won’t block any new AI-generated images produced by tools potentially trained on the reported image.
More creative solutions are needed as the problem worsens. The Post reported that in the past few months, NCMEC has fielded “a sharp uptick of reports of AI-generated images,” as well as more “reports of people uploading images of child sexual abuse into the AI tools in hopes of generating more.”
Do AI images violate child protection laws?
Two officials from the US Justice Department’s Child Exploitation and Obscenity Section told The Washington Post that AI-generated images depicting “minors engaged in sexually explicit conduct” are illegal under at least two US laws.
One law “makes it illegal for any person to knowingly produce, distribute, receive, or possess with intent to transfer or distribute visual representations, such as drawings, cartoons, or paintings that appear to depict minors engaged in sexually explicit conduct and are deemed obscene.” The other law “defines child pornography as any visual depiction of sexually explicit conduct involving a minor,” including “computer-generated images indistinguishable from an actual minor.”
In the US, there has been at least one recent case where creating “sexually explicit ‘deepfaked’ images” of minors resulted in criminal charges. The Nassau County district attorney in Long Island, New York, announced in April that “a Seaford man,” 22-year-old Patrick Carey, was sentenced to “six months’ incarceration and 10 years’ probation with significant sex offender conditions” after 11 women reported deepfake images that he had created from underage photos the victims posted online while in middle and high school and then uploaded to porn sites.
At that time, however, the county district attorney, Anne T. Donnelly, said that new legislation was needed to protect future victims, noting that “New York State currently lacks the adequate criminal statutes to protect victims of ‘deepfake’ pornography, both adults and children.”
Outside the US, ActiveFence in its May 2023 report noted that the United Kingdom’s Protection of Children Act, Australia’s Criminal Code Act, and Canada’s Criminal Code all appear to include AI-generated images in bans of materials depicting or promoting sexual activity with a minor.
In Canada, there has reportedly been at least one conviction so far. A 61-year-old Quebec man, Steven Larouche, was sentenced to more than three years in prison after being charged with creating AI-generated child pornography, CBC News reported.
Larouche pleaded guilty to creating seven videos by superimposing children’s faces onto other people’s bodies, after arguing for a lighter sentence because no children were harmed.
The judge in that case, Benoit Gagnon, reportedly wrote in his ruling that there was plenty of harm to children. Children whose photos were used were harmed, Gagnon said, as well as children harmed by the growing market for such AI content and any children victimized by anyone whose “fantasies” were “fueled” by the flood of AI images normalizing illegal activity online.
Gagnon noted that this was likely Canada’s first case “involving deepfakes of child sexual exploitation.” A child exploitation expert told CBC that there have only been prior cases using more rudimentary technology, like Photoshop, to generate fake images.
Moving from dark web to social media?
US Justice Department officials told The Washington Post that “hundreds of federal, state and local law-enforcement agents involved in child-exploitation enforcement” will “probably discuss” how to deal with AI-generated CSAM at a national training session this month. But law enforcement officials aren’t the only ones working on finding better solutions to this problem.
AI researchers told The Post that they are also exploring technical solutions, like imprinting a code onto AI-generated images so that creators are always linked to their content, which could dissuade some people from creating obscene content.
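As a loose illustration of that idea, the toy Python sketch below hides a short identifier in an image’s least-significant pixel bits and reads it back out. The function names and the embedding scheme are hypothetical simplifications; real provenance proposals rely on far more robust watermarking or cryptographically signed metadata, not anything this easy to strip.

```python
# Toy illustration of imprinting a creator/model identifier into an image.
# This simplistic least-significant-bit scheme is for illustration only.
import numpy as np
from PIL import Image

def embed_identifier(image_path: str, identifier: str, out_path: str) -> None:
    """Hide an ASCII identifier in the red channel's least-significant bits."""
    img = np.array(Image.open(image_path).convert("RGB"))
    bits = [int(b) for byte in identifier.encode("ascii") for b in f"{byte:08b}"]
    flat = img[:, :, 0].flatten()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits  # overwrite lowest bits
    img[:, :, 0] = flat.reshape(img.shape[:2])
    Image.fromarray(img).save(out_path, format="PNG")  # lossless, preserves bits

def read_identifier(image_path: str, length: int) -> str:
    """Recover a length-character identifier embedded by embed_identifier."""
    img = np.array(Image.open(image_path).convert("RGB"))
    bits = img[:, :, 0].flatten()[:length * 8] & 1
    chars = [int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)]
    return bytes(chars).decode("ascii")
```

Even a scheme like this only helps if the identifier survives re-encoding and cropping, which is exactly why researchers are focused on more durable watermarks.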
There’s also a much more sensitive proposal that could only proceed with government approval: companies could train AI models on fake child sexual exploitation images to build more sophisticated detection systems. Without proper safeguards, though, that route could result in companies developing AI models that are even better at producing lifelike abusive images.
Right now, there are no perfect solutions. Trust and safety providers like ActiveFence work to help platforms detect obscene content that’s still being generated. Their researchers monitor how child predators use AI tools, sharing insights with tech companies, like the latest keywords and prompts used to circumvent content restrictions.
John Shehan, the senior vice president of NCMEC’s child exploitation division and international engagement, told Ars that “offenders are currently testing the tools and attempting to create material.” He said that NCMEC anticipates AI-generated CSAM will become a bigger problem in the future and likely won’t remain circulating predominantly on the dark web for long.
“I’m not aware of AI-generated CSAM on the open net at the moment but feel confident it’s just a matter of time until it is,” Shehan told Ars.
ActiveFence senior child safety researcher Guy Paltieli told Ars that the moment may already be here. His team is already seeing “content migrating from dark web forums to the clear net,” finding “numerous examples” on “mainstream social media platforms.”
“In some cases, it is used to advertise messaging app groups where more explicit content can be found, while in other cases, it is shared to teach users how to generate similar content,” Paltieli told Ars.
Until laws are clear on how society should deal with child sexual exploitation deepfakes, it seems the obscene AI images will keep spreading, potentially increasing demand while AI tools get trained on more illegal images that users keep uploading.
Shehan told Ars that “NCMEC has received at least a dozen CyberTipline reports submitted by companies regarding individuals attempting to upload known CSAM” into AI image generators to influence the kind of content that can be created. Paltieli said one result of companies moving too slowly to solve the growing problem is the recent appearance of even more explicit AI-generated images.
“In the past two months, we have seen an escalation in the type of content being shared on social media platforms, which became much more explicit than what we used to see in the past,” Paltieli told Ars.