AI-driven internet worsens child sex abuse image market


Child sex abuse material (CSAM) is already difficult for dedicated organizations to trace and curtail. Diffusion models (a form of AI) are now making prevention, and efforts to rescue abused children, even harder.


In the past week, reports and investigations have revealed a damning state of affairs for the role of so-called artificial intelligence (AI) algorithms in the child sex abuse material (CSAM) market. While machine learning software, including large language models like ChatGPT, has come under great scrutiny as corporations test the limits of replacing human labor with content generated by scraping existing human output, the impact of these programs on human safety has received less media prominence. Until now.

On June 19, The Washington Post offered a troubling report from nonprofit child-safety group Thorn, which has seen month-over-month growth in the artificially generated CSAM market since last fall. Like other data-synthesizing software, "diffusion models" create images drawn from existing content—that is, images of real children subjected to sexualized abuse—to churn out blended renderings of extreme violence, including violence against infants and toddlers.

Today, the BBC shared its own research into the growing phenomenon, which uses an open-source version of an application called Stable Diffusion to produce photo-realistic images based on keyword prompts. Freelance journalist Octavia Sheepshanks had been investigating this issue for months, in conjunction with the National Society for the Prevention of Cruelty to Children (NSPCC), which has been calling for tech companies to take a more proactive role in blocking such images.

Already, synthetic CSAM has made its way to mainstream platforms, including pay-to-view content on the fundraising site Patreon and on a popular Japanese artists' hub called Pixiv. Although Patreon has emphasized a zero-tolerance policy, Pixiv operates under much more relaxed Japanese legislation around sexualized violence involving children. Pixiv has taken steps to heighten monitoring in recent months, but both sites, and others like them, face an uphill battle: synthetic content can be generated faster than current detection mechanisms can hope to match, flooding the internet with more such imagery by the day.

In the UK, the National Police Chiefs' Council is stepping up enforcement efforts and expressing outrage at private companies financially benefiting from such content. Meanwhile, the government is still struggling to advance an "Online Safety Bill". The version presented last fall would establish a greater duty of care for tech companies: covering not only images of child sex abuse but also live-streaming spaces and other forums where children are at risk of grooming.

However, this legislation has met with criticism for its attempt to restrict "lawful but harmful" speech. It also poses a logistical nightmare: even if such a law does come into effect, how do we actually keep up with so much harmful content?

The problem with this new era of CSAM

The damage that synthetic CSAM does is threefold:

First, the sheer volume of images created by blending real materials depicting sexually abused children makes it easier for pedophiles to hide their victims even from organizations working full-time on their identification and rescue.

Second, the glut of synthetic images makes it harder to identify CSAM that is not artificially created. More time must now be spent tracking down and confirming the authenticity of even traditional forms of abusive materials.

Third, even if synthetic CSAM is found to be made up of entirely fabricated content, such images are still treated the same under many state laws: illegal to possess, publish, or transfer. This is because of the strong risk of escalation from fantasy to enactment among people who cultivate sexual responses to depictions of child abuse.

But this last issue brings us into complicated legal terrain, because it compels us as a society to face up to our own immaturity: the ongoing struggle between personal conviction in one's right to do, watch, or say anything one wants, and the clear need for checks and balances on individual action that underpins our (ostensible) commitment to living together in a society.

In the US, legal argument has sprung up around the idea that, if no real children were involved in the development of such images, synthetic CSAM shouldn't be considered as having broken any child abuse laws. This is in keeping with cultures that take similar umbrage at hate speech legislation: arguing, for instance, that simply calling for harm to a given group from a public platform does not make one culpable in someone else's choice to answer that call.


As The Washington Post reported on June 19, Justice Department officials who work in this field couldn't recall a case of someone being charged solely for the possession of synthetic images, so this is currently a hypothetical site of outrage. What's not hypothetical is the amount of time that investigators now have to spend trying to ascertain whether synthetic CSAM was developed with images of real children.

And that’s just the tip of the iceberg, when it comes to time expressly wasted on the wrong sites of harm to children in our societies.

A more mature approach to child abuse and new tech

In a world where children are far more likely to be sexually abused by family members and other people already known to them (through school, sports, and church), we continue to treat "stranger danger" as the most pressing issue when it comes to protecting children. Just… not when it comes to US school shootings, and not when it comes to slowing down our integration of shiny new technologies long enough for regulation to catch up.

In the US, right-wing media has effectively gamified child abuse, diverting attention from the active sexual grooming of children in many religious spaces to heighten paranoia about queer and trans people as the "real" predators. But what we really need is a united front against the nature of contemporary internet cultures, which are predominantly driven by private enterprises that have only very recently faced any meaningful consequences, through greater public oversight, for ethical and legal lapses.

In 2019, UNESCO put out a key report on this matter: "Child online safety: Minimizing the risk of violence, abuse, and exploitation online". Though published well before the latest storm of synthetic CSAM and other problems posed by so-called AI, its main arguments and suggestions remain as relevant as ever. We still require a "coordinated and global approach" to child sexual abuse, and an open-source environment for solutions-sharing, to combat how much tech is already out in the world accelerating the challenges of CSAM detection, filtration, and sourcing.

The UN has set an ambitious sustainability goal of "ending" child abuse by 2030, in part by increasing global collaboration on standards-setting and pushing for greater digital literacy and local abuse-prevention campaigns.

Is this UN goal realistic? Not even close.

However, this struggle for universal human dignity is only made harder when our approach to new technologies distracts us from their negative consequences.

Last year, the Broadband Commission for Sustainable Development produced a framework for policymakers on "AI and Digital Transformation Competency" that concerned citizens would do well to familiarize themselves with, to understand just how keen their governments are to onboard AI-driven technologies. Only by recognizing where the momentum currently lies can we push officials to prioritize tackling the dangers to children's safety in the process.

Mostly, though, we have to decide how seriously we’re ready to take the issue of child abuse. Is it something we only care about when we can leverage “protecting children” for political gain? Or only so long as preventing sexual abuse doesn’t infringe on our ability to imagine ourselves completely “free” in our societies when it suits us?

Or do we actually have it in us to face the shiny new world these last few decades of rapid technological growth have brought us—and decide soberly and collectively how best its tools should serve us going forward?
