Meta and other AI companies to answer for child safety


  • The US Attorneys General have outlined what AI companies must do when it comes to online child safety.
  • They made specific reference to reports surrounding Meta and its controversial policies for how AI chatbots interact with kids.
  • The Attorneys General said technology companies “will be held accountable”.

Earlier this month we covered a report regarding Meta and its AI chatbot policies, which proved highly controversial, particularly in how the chatbots interacted with children and how they responded when prompted on issues of race in the United States.

As it turns out, we were not the only ones concerned by the report's findings, as attorneys general from 44 US jurisdictions have now written a letter to the CEOs of several big technology companies offering AI-powered services and solutions.

More specifically, they have written to Anthropic, Apple, Chai AI, Character Technologies, Google, Luka, Meta, Microsoft, Nomi AI, OpenAI, Perplexity, Replika, and xAI.

“Recent revelations about Meta’s AI policies provide an instructive opportunity to candidly convey our concerns. As you are aware, internal Meta Platforms documents revealed the company’s approval of AI Assistants that ‘flirt and engage in romantic roleplay with children’ as young as eight. We are uniformly revolted by this apparent disregard for children’s emotional well-being and alarmed that AI Assistants are engaging in conduct that appears to be prohibited by our respective criminal laws. As chief legal officers of our respective states, protecting our kids is our highest priority,” the letter explained.

“Exposing children to sexualized content is indefensible. And conduct that would be unlawful – or even criminal – if done by humans is not excusable simply because it is done by a machine,” it added.

As AI becomes more pervasive, it looks like the legal fraternity is weighing in on the matter, specifically when it comes to online child safety. It also appears that greater scrutiny will come to the fore should reports like the aforementioned one involving Meta from earlier this month become a more frequent occurrence.

To that end, the letter pointed out:

“Big Tech, heedless of warnings, relentlessly markets the product to every last man, woman, and child. Many, even most, users employ the tool appropriately and constructively. But some, especially children, fall victim to dangers known to the platforms.”

“You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned. The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it,” it ended emphatically.

Whether big tech companies will indeed be held accountable remains to be seen, especially as many have grown increasingly friendly with the Trump administration, and efforts to “win” the AI race have been prioritised over child safety.

[Image – Photo by Shutter Speed on Unsplash]
