Earlier this week came the news, both disturbing and empowering, that three teens had filed a class-action lawsuit against Elon Musk’s xAI, alleging that its Grok tool was used, with their photos, to make nonconsensual nude and sexually explicit images of them. It was “heartbreaking,” said the mom of one of the girls, to watch her daughter’s senior year marred by such a horrifying violation.
Now comes the revelation, through a survey out of George Mason University in Virginia, that more than a third of teens have had at least one such sexualized AI-generated image of themselves created (and sometimes distributed) by someone else without their consent.
In addition, found survey leader and digital forensics expert Chad Steel, more than half of those surveyed reported that they themselves had used AI tools to create at least one nude image.
“Teens are no longer just digital natives but AI-natives,” he said in a press release. “‘Nudification’ and GenAI apps are their new ‘sexting,’ only with more challenging issues surrounding consent.”
The findings were published Wednesday in the journal PLOS One.
For his research, Steel analyzed online survey results from 557 anonymous U.S. teens aged 13 to 17 (with parental consent) to find out about their experiences with creating, sharing, and viewing sexualized generative AI (GenAI) images.
While prior research has suggested that creating and distributing sexualized images (with or without the use of AI) has become normalized among U.S. teens — and that incidents of such AI misuse by adolescents have risen, causing lasting life disruptions — the overall prevalence has been unclear until now.
Steel’s research found some staggering numbers, including that 55.3% had used “nudification” tools to create at least one image of themselves or others, and that 54.4% had received such an image. It also found that 36.3% of those surveyed had at least one sexualized, nonconsensual GenAI image of themselves made by someone else, and that 33.2% had such an image of themselves distributed without their consent.
While the results were largely similar across demographic categories, including age, and widespread among both male and female participants, males reported higher rates of creating and distributing sexualized GenAI images of themselves and others, both consensually and nonconsensually.
From the findings, Steel concludes that better education on the “healthy and safe usage of GenAI technology” is needed — and that it needs to start before the age of 13. He also calls for better training for law enforcement on handling both offenders and victims, new legislation related to such GenAI child sexual exploitation material (CSEM), and more research into the topic.
