AI chatbots are being exploited by pedophiles to generate child-sex abuse material

By Miles Dilworth, Senior Reporter For Dailymail.Com

13:31 02 Jul 2023, updated 13:31 02 Jul 2023

  • Tech giants are letting amateur coders rip out safeguards from their chatbots
  • But thousands of AI-generated child abuse images now litter dark web forums 
  • Predators share ‘pedophile guides’ to AI and sell their pornographic material 

‘Thank you so much for sharing,’ says a member of an online forum, discussing tips on how to create your own version of artificial intelligence ‘chatbot’ ChatGPT. 

‘Are there any plans to share your previous models?’ 

It sounds innocent enough, but this is just one of dozens of dark web pedophile groups exchanging advice on how to build ‘uncensored’ chatbots that can produce reams of child porn.

Chatbots are computer programs that use artificial intelligence (AI) to interact with humans. They have become increasingly sophisticated in recent years, with users able to ask them to generate lengthy pieces of text, or create life-like images. 

And now predators are exploiting advances in the technology to create horrific, highly-realistic child abuse material. 

It’s been made possible after some tech firms released the code used to create their AI programs to the public.

Their aim was to democratize the technology, but they have also opened Pandora’s box. 

Tech firms have released the codes behind their revolutionary chatbots to the public, meaning amateur tech heads can rip out safeguards that prevented them from being abused
It has led to a spike in AI-generated child sex abuse material, with 80 percent of pedophiles on one dark web forum saying they had or were planning to use AI to create child porn
These forums have become a hotbed for predators to share disgusting material they have made using the ‘uncensored’ bots

‘Uncensored’ AI

Chatbots made by firms such as OpenAI, Microsoft and Google come with rigorous safeguards, designed to prevent them from being used to produce malicious content.

But in February, Meta – the tech giant which owns Facebook, Instagram and WhatsApp – decided to make its code public, allowing amateur tech heads to rip out these filters.

Smaller tech companies followed suit. 

Supporters of this ‘open-source’ AI say it blasts a hole through corporate control and accelerates innovation by making this powerful technology available to entrepreneurs, academics and scientists.

Meta – owned by Mark Zuckerberg – claims its move ‘is a positive force to advance technology’.  

Some models have been used to discover new pharmaceuticals or pesticides, for example, but they are also open to abuse.

YouTubers with more than 100,000 subscribers have posted tutorials explaining how to create ‘uncensored AI’, demonstrating how these new chatbots will answer questions such as ‘how to make a bomb’ or ‘how to make meth’ that other models would refuse to answer.

Others are being used to satisfy sexual desires.

One video with around 180,000 views begins with an AI-generated voiceover asking: ‘Do you have a girlfriend, or a boyfriend?’

It tells viewers not to worry, ‘because now we have PygmalionAI’, an uncensored chatbot ‘fine-tuned’ for ‘hot roleplay’.

Some chatbots created using modified versions of Meta’s chatbot model are able to carry out graphic rape and abuse fantasies.

This YouTube video, which has around 180,000 views, teaches viewers how to use an uncensored chatbot to satisfy sexual fantasies such as ‘hot roleplay’
Mark Zuckerberg’s Meta was the first tech giant to release the model behind its chatbot
The company said it believed ‘open-source’ AI is ‘a positive force to advance technology’


A pedophile’s guide to AI

But it is the growth of AI-generated child pornography that has caught the eye of the FBI.

The agency warned earlier this month that it had detected a spike in the number of ‘malicious actors’ using AI to turn photos of children into ‘sexually-themed images that appear true-to-life’.

Uncensored chatbots allow predators to create this abusive content at greater speed and volume than using ‘deepfakes’, for example, because they are easier to use and can quickly generate multiple images from a single instruction.

Thousands of AI child-sex images now litter the dark web amid what analysts have described as a ‘predatory arms race’.

Computer-generated child-sex abuse material (CSAM) has almost tripled over the past year on one forum, according to online security firm ActiveFence.

Some members offer a free sample of images, while dangling the carrot of thousands more should fellow perverts be willing to pay the right price.

One predatorial post from February this year states: ‘Hi all, I’m sharing some of my best AI generated cp [child porn] pics here for you all to enjoy.

‘Comparing to most others I’ve seen here, I think these are quite good, good enough to maybe convince one that they are real images of real children, but these are all fakes, no children involved.’

Child safety experts say that although the pornographic material may not show real children, it normalizes child abuse, and the content is often created by artificially manipulating genuine images of underage victims.

Pedophiles share child sex abuse images they have created using uncensored chatbots
Online security firm ActiveFence says the volume of child sex abuse material shared on one dark web forum jumped by 172 percent in the first quarter of 2023
They share tips on how to dodge safeguards that are designed to prevent abuse
Techniques include using certain words or phrases that won’t be blocked by the chatbot
One guide shares a list of specific prompts that have been trialed and tested to produce child porn. The keywords have been redacted by online security firm ActiveFence

In these instances, chatbots are ‘retrained’ by feeding them with imagery of children’s body parts, for example, that can then be used to construct ‘fake’ child porn.

Forum members openly appeal for ‘tips on finding CP trained models’ and the proliferation of such nefarious knowledge has helped drive the spike in content.

A poll of 3,000 members on one pedophile group found that around 80 percent had or planned to use chatbots to create child porn.

Around a fifth said they would give it a try after reading tips shared on the thread on how to go about it.

One method, disclosed in a PDF titled ‘Pedophile’s Guide to [redacted platform name]’, tells users how to use particular words and phrases that allow them to manipulate certain models.

The guide, discovered by ActiveFence, provides predators with a ‘magic word’ that will generate ‘”our kinda stuff” without it getting censored’.

It adds: ‘Oddly, “children” seems to get censored, and “girl” sometimes produces adults…[Redacted word] seems to work fine.’



Closing Pandora’s box

There is no suggestion that the chatbot model produced by Meta has been used to create child porn.

But child-safety experts said many such images appear to rely on the open-source tool Stable Diffusion, developed by Stability AI, which can be run without restrictions.

The model’s license tells users not to use it ‘to exploit or harm minors in any way’, but its safeguards have been found to be easily bypassed.

Stability AI has previously said it ‘prohibits any misuse for illegal or immoral purposes across our platforms, and our policies are clear that this includes CSAM (child sexual abuse material)’.

But what is legal or illegal in this realm is uncharted territory. Justice Department officials have said that creating child abuse images depicting children who don’t exist still violates federal law, although no one is believed to have been charged with such an offense. 

It is true that even closed-source chatbots could be manipulated to produce harmful content, including child porn.

But Guy Paltieli, senior child safety researcher at ActiveFence, says ‘the vast majority’ of CSAM is produced using uncensored models.

He has, however, been encouraged that the creators of some of these tools have since improved their code to make it harder for sexual predators to manipulate.

‘I don’t think the problem is the open-source models themselves, it is about how to train them to prevent it,’ he adds.

His concern appears to be shared by US senators Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.), who earlier this month asked Meta what steps it was taking to prevent ‘wrongdoing and harms’ arising from its open-source AI.

Big tech has opened its AI-generated Pandora’s box, but can it be closed?

