
AI-generated child sexual abuse revealed in harrowing report


The Internet Watch Foundation (IWF) has been investigating its first reports of AI-generated child sexual abuse material (CSAM).

In total, 20,254 AI-generated images were found to have been posted to one dark web CSA forum in just a month.

The initial investigations uncovered a world of text-to-image technology which could produce CSAM with a high degree of realism, raising alarm bells among watch groups as it can potentially jeopardise future investigations and cause further harm to children.

The harrowing report detailed the images perpetrators are able to produce from simple text-to-image AI models, which create fast and accurate results.

New abuse images of previous child victims are being recreated, and celebrities are being de-aged using the technology and depicted being sexually abused as children.

The technology is mostly derived from open-source AI models, which developers can access to create bespoke models designed to bypass the guardrails of mainstream generative AI services, which typically have rules against the production of illicit and even legal pornographic material.

Many of these models have already been trained on pornographic material, though Stable Diffusion 2.0 and later versions attempted to filter their training data more heavily so that it is harder to generate adult content.

The issue is confounding lawmakers even after the introduction of the Online Safety Bill to government.

While the images are AI-generated and therefore do not depict real-life abuse, the IWF has said that they are criminal under two UK laws: the Protection of Children Act 1978 and the Coroners and Justice Act 2009.

These laws criminalise any “indecent photograph or pseudo-photograph of a child” as well as any “prohibited image of a child”, including cartoons, drawings, animations, or similar.

Under these laws, AI-generated images of child abuse are illegal, as is purchasing, generating, or distributing them.

The IWF found that perpetrators can legally download everything they require to generate these images and do so completely offline, avoiding detection.

One of the major concerns for authorities is that most AI child abuse material is realistic enough to be treated as ‘real’ CSAM.

Text-to-image technology is advancing, and the most convincing AI material is indistinguishable from real CSAM, which can confound investigators trying to help children who are being actively abused.

Further, even though these images do not depict active abuse, they have increased the potential for the re-victimisation of known child sexual abuse victims, as well as the victimisation of famous children and children known to perpetrators.

The production of AI child sexual abuse material also opens another avenue for perpetrators to profit from abusing children.


The report follows a joint statement issued by the UK and the US, pledging to work together to fight the development of these disturbing images and the pernicious use of AI behind them.

The sickening report does prompt a call to direct action, however: the creation and distribution of guides to generating AI CSAM is not currently an offence under UK law, but could be made one.

Currently, the legal status of AI CSAM models is a complicated question, but one that may have to be addressed at the upcoming AI Safety Summit in light of this harrowing report.

“Everyone needs to understand the scale of the marketplace that is driving these changes. This is organised crime and it is worth a significant amount of money which is why it is thriving,” Annabel Turner of CyberSafe Scotland commented on the disturbing report.

“Responding to this is going to require something from everyone… and many of the issues surrounding this are complex. But first and foremost we need urgent leadership around the development of responsible and ethical AI and platforms and developers to harness all tech solutions to prevent these images being made or shared.

“There will need to be further conversations about likely regulation, and individuals will need to be continuously educated on what to do when these images surface on the clear web and on steps they can take to reduce personal risk. We urgently need the minds of those in leadership and tech to be focused on solutions.

“I hope that the Scottish Government will push for this to be front and centre at the UK Government Bletchley Park AI summit at the beginning of November.”


National Cyber Security
