Info@NationalCyberSecurity

An update on our child safety efforts and commitments


Combatting child sexual abuse and exploitation (CSAE) is profoundly important work for Google. We’ve invested significant resources in building detection technology, training specialized teams and leading industry efforts to stop the spread of this harmful content.

Today, we’re announcing our commitment to the Safety by Design Generative AI principles — developed by Thorn and All Tech is Human. These mitigations complement our existing work to prevent the creation, dissemination and promotion of AI-generated child sexual abuse and exploitation. We’re proud to make this voluntary commitment — alongside our industry peers — to help make it as difficult as possible for bad actors to misuse generative AI to produce content that depicts or represents the sexual abuse of children.

This step follows our recent announcement that we’ll be providing ad space to the U.S. Department of Homeland Security’s Know2Protect campaign — and increasing our ad grant support to the National Center for Missing and Exploited Children (NCMEC) for its 40th anniversary and to promote its No Escape Room initiative. Supporting these campaigns is critical to raising public awareness and to giving children and parents tools to identify and report abuse.

Protecting children online is paramount, and as AI advances, we know this work can’t happen in a silo. We have a responsibility to partner with others in industry and civil society to make sure the proper guardrails are in place. In addition to these announcements, today we’re sharing more details on our AI child safety protections and our recent work alongside NGOs.

How we combat AI-generated CSAM on our platforms

Across our products, we proactively detect and remove CSAE material through a combination of hash-matching technology, artificial intelligence classifiers and human review. Our policies and protections are designed to detect all kinds of CSAE, including AI-generated CSAM. When we identify exploitative content, we remove it and take appropriate action, which may include reporting it to NCMEC.
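The hash-matching approach mentioned above can be sketched in miniature: compute a digest of each file and check it against a set of digests of known abusive material. This is an illustrative sketch only, not Google's implementation; production systems rely on perceptual hashing so that re-encoded or resized copies of known material still match, whereas the exact cryptographic hashing shown here matches byte-identical files only. The blocklist contents, sample bytes and function name are hypothetical.

```python
import hashlib

# Hypothetical blocklist of known-bad content digests (hex strings).
# Real deployments use perceptual hashes and much larger, curated sets.
KNOWN_HASHES = {
    hashlib.sha256(b"known-bad-sample").hexdigest(),
}

def matches_known_content(data: bytes) -> bool:
    """Return True if this content's digest appears in the blocklist."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

print(matches_known_content(b"known-bad-sample"))  # True: exact copy of listed content
print(matches_known_content(b"harmless upload"))   # False: digest not in the set
```

A match would then trigger the downstream steps the post describes: removal and, where appropriate, a report to NCMEC.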

In line with our AI principles, we’re focused on building for safety and proactively implementing guardrails to address child safety risks, including the creation of AI-generated child sexual abuse material (CSAM):

  • Training datasets: We are integrating both hash-matching and child safety classifiers to remove CSAM as well as other exploitative and illegal content from our training datasets.
  • Identifying CSAE-seeking prompts: We use machine learning to identify CSAE-seeking prompts and block them from producing outputs that may exploit or sexualize children.
  • Adversarial testing: We conduct adversarial child safety testing across text, image, video and audio for potential risks and violations.
  • Engaging experts: Through our Priority Flagger Program, we partner with expert third parties who flag potentially violative content, including child safety violations, for our teams’ review.

How we collaborate with child safety experts and industry partners

Over the past decade, we’ve worked closely with child safety experts including NGOs, industry peers, and law enforcement to accelerate the fight against CSAE content. Our latest support for NCMEC builds on past collaborations, including our development of a dedicated API to prioritize new reports of CSAM and support the work of law enforcement.

Similarly, a specialized team at Google helps identify when flagged content indicates a child may be in active danger. This team then notifies NCMEC of the report’s urgency so it can be routed to local law enforcement for further investigation. We’re proud that this work has helped lead to successful rescues of children around the world.

These collaborations continue to inform our support for industry partners and innovations like our Child Safety Toolkit. We license the Toolkit free of charge to help other organizations identify and flag billions of pieces of potentially abusive content for review every month.

How we continue to support stronger legislation

We are actively engaged with lawmakers and third-party experts to work toward our shared goal of protecting kids online. That’s why — this year — we’ve announced our strong support for several important bipartisan bills in the United States, including the Invest in Child Safety Act, the Project Safe Childhood Act, the REPORT Act, the SHIELD Act and the STOP CSAM Act.

This work is ongoing and we will continue to expand our efforts, including how we work with others across the ecosystem, to protect children and prevent the misuse of technology to exploit them.
