CV NEWS FEED // Digital security experts and the FBI are warning parents not to post images of their children on social media, as increased access to artificial intelligence (AI) is fueling a sharp rise in child sexual abuse material (CSAM).
Generative AI programs are now advanced enough for almost anyone to use, making it easy for criminals to take publicly posted photos of minors and alter them into realistic-looking explicit material.
As the amount of CSAM increases, child protection organizations and law enforcement agencies are overwhelmed by the number of child sex abuse reports—and some worry that today’s law enforcement system isn’t technologically advanced enough to keep up with online sexual predators.
CSAM has become such a problem in the United States that more than 50 attorneys general petitioned Congress last week to establish a national commission to study AI-generated CSAM. They also called for updated legislation to protect children and keep pace with the evolving technology.
“While internet crimes against children are already being actively prosecuted, we are concerned that AI is creating a new frontier for abuse that makes prosecution more difficult,” the attorneys general wrote to Congress. “We are engaged in a race against time to protect the children of our country from the dangers of AI. Indeed, the proverbial walls of the city have already been breached. Now is the time to act.”
Legislators throughout the U.S. have also introduced several bills in the last year to protect children from CSAM. Sen. Dick Durbin (D-Ill.) put forward the STOP CSAM Act of 2023 in April, which would allow victims of CSAM to sue social media platforms that host the sexually explicit content. In August, Reps. Ann Wagner (R-Mo.) and Sylvia Garcia (D-Texas) introduced the Child Online Safety Modernization Act of 2023, which would increase the amount of information that must accompany reports to the National Center for Missing and Exploited Children's (NCMEC) tip line. About half of the reports submitted to NCMEC currently lack sufficient information to track down a child abuser, leading to wasted time, confusion, and inefficiency.
The concerns of the attorneys general came just a few months after the FBI issued an urgent public service announcement, cautioning parents and guardians against posting photos of their children online.
“The FBI is warning the public of malicious actors creating synthetic content (commonly referred to as ‘deepfakes’) by manipulating benign photographs or videos to target victims. Technology advancements are continuously improving the quality, customizability, and accessibility of artificial intelligence (AI)-enabled content creation,” the announcement said.
“The FBI continues to receive reports from victims, including minor children and non-consenting adults, whose photos or videos were altered into explicit content. The photos or videos are then publicly circulated on social media or pornographic websites, for the purpose of harassing victims or sextortion schemes,” it continued.
In sextortion schemes, a predator, often posing as a teen, persuades minors to send sexual photos. Once the images are obtained, the criminal can threaten to publish them in order to demand money or sex from the victims.
Yaron Litwin, a digital safety expert, said that deepfakes and sextortion can cause emotional, mental, or physical harm to minors. Once cyber criminals create deepfakes, an incredibly fast process thanks to generative AI, they can use the threat of publishing the images to extort money or sex from minors. In situations like these, Litwin said, minors feel trapped between public humiliation, meeting the criminal's demands, or, in the most tragic cases, suicide as an escape.
According to NCMEC, its tip line for reporting child sexual exploitation received over 32 million CSAM reports in 2022, around 87,000 reports per day and an 82% increase from 2021.
Parents Together, an organization focused on protecting the family, reported that one in three children will have an unwelcome sexual experience online before age 18. According to a report from Thorn, another organization fighting online child sex abuse, one in six children has shared explicit images online, and one in four says that posting sexually explicit content is a normal experience.
To combat this, NCMEC and other organizations sift through reports of sexual content and refer identifiable offenders for prosecution, but the overwhelming volume of flagged content makes the process take more time than victims of CSAM can afford.
Netspark, a digital tech company that promotes online privacy and safety, is on the front lines of the battle against CSAM. One of the company's latest releases is a program called CaseScan, which uses AI to distinguish AI-generated CSAM from material depicting actual child sexual abuse.
Litwin, who is also chief marketing officer at Netspark, told The Epoch Times that as generative AI became more widespread at the beginning of 2023, the amount of online CSAM also rose. With CSAM increasing faster than law enforcement agencies can keep up, Litwin said it is crucial to use "good" AI to fight "bad" AI.
Litwin said that’s where CaseScan, his company’s AI filtering system, comes in.
According to The Epoch Times, CaseScan allows investigators to identify AI-generated CSAM much faster, and it also eases the mental toll on the human investigators who would otherwise have to view CSAM all day.
Litwin also said that AI-generated CSAM still constitutes sexual abuse and can be prosecuted under U.S. law. So far, however, no AI-generated CSAM case has set a legal precedent.
“I think today it’s hard to take to court, it’s hard to create robust cases. And my guess is that that’s one reason that we’re seeing so much of it,” he said.
Parents: Protect Your Children
Litwin recommended that parents be more cautious about what they post online, especially photos of their children. He added that one of the best ways to protect minors from CSAM is open, direct communication between parents and children about online predators and the risks of posting photos.
“One of our recommendations is to be a little more cautious with images that are being posted online and really try to keep those within closed networks, where there are only people that you know,” Litwin said. “[And it’s important to be] communicating with our kids about some of these risks and explaining to them that this might happen and making sure that they’re aware.”