Just days after the precedent-setting verdicts over Meta and YouTube's efforts to promote addictive behaviour among children, a group of digital safety advocates has petitioned Google CEO Sundar Pichai and YouTube CEO Neal Mohan to change YouTube's policies in order to cut down on AI slop.
The demand came from a coalition of US-based organisations and child development experts who sought an outright ban on AI-generated "Made for Kids" content. As has been its wont in the past, YouTube came up with a standard, sanitised response claiming it maintained "high standards" for content on YouTube Kids.
The signatories to the letter have argued that, given the absence of evidence that AI slop is safe for children, as well as the potential of these videos to "mesmerise and harm kids", Google should take swift action to protect children on its platforms. Readers would be aware that YouTube recently partnered with Gen AI studio Animaj, which specialises in AI-led kids' content.
The groups also announced a public petition demanding YouTube implement several safety policies to immediately address the proliferation of AI slop directed towards children.
These include clearly labelling AI-generated content on YouTube, barring such content from YouTube Kids, prohibiting child-focused videos that are AI-generated, prohibiting algorithmic recommendations of AI content to users below 18 years, adding a toggle in parental controls to stop kids searching for it, and cutting all investments in AI-generated kids' videos.
Nonprofit Fairplay, an organisation focusing on child safety, is helming the latest public opposition to Big Tech’s all-out AI play. The memorandum is signed by several organisations such as the American Federation of Teachers, the National Black Child Development Association, and Mothers Against Media Addiction.
Experts and authors such as Jonathan Haidt, who wrote the highly popular and often-cited book "The Anxious Generation", are among those pushing for this digital reform. These groups have also pointed to growing concerns that exposure to AI content among children and teens can distort their perception of reality, result in cognitive overload, and displace real-world activities necessary for development.
Eminent behavioural paediatrician Jenny Radesky was another signatory, who said YouTube was going overboard with its engagement metrics. First the company introduced Shorts with Made for Kids content without understanding its impact on young viewers, she says, and now AI slop is competing for their attention on those very feeds.
A YouTube spokesperson claimed that AI-generated content on YouTube Kids was limited to a small set of "high-quality channels". In addition, parents can block such channels, and the company says it prioritises transparency around AI content, labelling content made with its own AI tools and requiring creators to disclose realistic AI content.
"We're always evolving our approach to stay current as the ecosystem evolves," the YouTube spokesperson concluded, leaving us in no doubt that such prefabricated responses to actual problems are what make Big Tech appear heartless.
Do Big Tech companies never learn their lessons?
That YouTube can respond in such a cavalier manner, almost discounting the petitioners at this juncture, is what is shocking. Barely a week ago, a Los Angeles court found it and Meta liable for causing addiction by intentionally hooking the plaintiff as a child, leading her to develop anxiety, body dysmorphia, and suicidal thoughts.

In fact, another court in Santa Fe, New Mexico, even fined Meta $375 million in a lawsuit filed by the state seeking civil penalties, finding the company liable for misleading consumers about the safety of its platforms when it had instead endangered the mental health of children.
Interestingly, the juries in both courts took a similar view of the negligence of the two Big Tech companies, citing negligent design and a failure to warn of the risks as key reasons. In the Santa Fe case, however, it was the testimony of a former Meta employee that tilted the jurors strongly against Meta.
AI slop for kids is the pits as far as Big Tech is concerned
In the latest petition to curtail AI slop on YouTube Kids, the petitioners have said seemingly benign animations could turn out to be sexual or violent in nature. “Young people do not want to be targeted by this type of experience by YouTube’s algorithm. After the recent verdicts, one would think YouTube would finally take its responsibility to its young users seriously,” says Sebastian Mahal, co-chair of a youth-led lobby coalition ‘Design for Us’.
Mental wellness experts have repeatedly raised the alarm over how Instagram and YouTube push content via their algorithms to enhance engagement, considered by the Big Tech companies to be the best metric to drive profits. Some experts noted that YouTube should have already pulled down millions of AI-generated 'Made for Kids' videos by now.
Most of these videos are designed to entrance and entrap young children, leading to more screen time and removing them from activities that they need to perform offline for good physical and mental wellness.
However, the company appears to be taking a 180-degree turn, if recent events are anything to go by. The tech giant announced a $1 million investment in early March in Animaj – an AI-led children's entertainment company. Under the deal, Animaj would get exclusive access to Google's AI tools such as Veo and Imagen.
While the company's videos attract families and its affiliate channels claim more than 22 million subscribers, experts argue that even though most of these are built around nursery rhymes with kid-friendly characters, they are still about mesmerising children – yet another way of driving engagement.
And this is where the American Academy of Pediatrics has sent out a distinct warning to parents. It notes that AI-generated content aimed at mesmerising children with stimulating visuals and music encourages them to choose such videos over other content. Additionally, once used to this content, children shun evidence-based educational content with a slower pace and frequent interactions such as call-and-response cues.
In short, the experts argue that content that mesmerises children only displaces the time they need to spend playing, socialising, and using all their senses during a period in which infants are still wiring their brains.
