On March 31, Australia’s online safety watchdog, eSafety, announced it is considering legal action against Facebook, Instagram, Snapchat, TikTok, and YouTube over systemic failures to block users under 16.
Australia’s eSafety Commissioner, Julie Inman Grant, released her first compliance report since the ban took effect on Dec. 10 last year, stating that five million Australian accounts had been shut down.
However, the report also found that many children were still able to maintain their accounts, create new ones, and bypass age-verification systems.
eSafety highlighted “poor practices,” noting that some platforms undermined their own safeguards by allowing unlimited age-verification attempts and even nudging underage users to circumvent the system.
In a statement, Inman Grant said her office had “significant concerns about compliance,” with five out of the ten platforms reviewed failing to take “reasonable steps” to remove children’s accounts.
The five platforms not under investigation are Reddit, X, Kick, Threads, and Twitch.
Communications Minister Anika Wells said the main platforms under scrutiny are doing the “absolute bare minimum” to comply, warning that their lack of effort risks undermining Australia’s youth social media laws.
Experts say Australian courts will ultimately determine what constitutes “reasonable steps” for compliance. Platforms that fail to meet the standard could face fines of up to 49.5 million Australian dollars ($33 million).
Meta and Snap Inc.—owners of Facebook and Snapchat, respectively—have affirmed their commitment to complying with the law. Meta told The Associated Press, “We’ve also been clear that accurately determining age online is a challenge for the whole industry.”
Snap Inc. reported that 450,000 accounts had been locked as part of enforcement efforts, adding that it supports the “underlying goal of improving online safety for young Australians.”
TikTok and Alphabet Inc., the parent company of YouTube and Google, did not respond to requests for comment.
A failed experiment?
According to The Guardian, around seven in ten Australian children remain on social media, with no significant reduction in cyberbullying or image-based abuse reported.
The outlet also noted that experts in digital safety and youth advocacy were largely ignored during the policy’s development, a concern echoed by Commissioner Inman Grant.
Reports suggest the government had limited evidence the ban would succeed before passing the legislation, yet has largely placed responsibility on tech companies for its effectiveness.
Guardian writer Samantha Floreani argued the ban may not only be ineffective but could also increase online risks: while verifying users through identity documents might be more reliable than facial age estimation, it could expose users to greater harm from data breaches and hacking.
Floreani added that age-gating fails to address the internet’s core problems, including exploitative business models, scams, misinformation, AI-generated content, and other harmful material that continues to put children at risk.
Lisa Given, an information sciences expert at the Royal Melbourne Institute of Technology (RMIT), said courts will likely focus on defining what “reasonable steps” look like in practice.
“If a tech company has said, ‘Look, we’ve implemented age assurance and taken these steps,’ that may be considered reasonable,” Given said. “Even though age-assurance technologies are flawed, whose fault is that? Should companies be held accountable for tools that are not—and may never be—100 percent foolproof?”
“That’s really the crux of it: what the courts will deem reasonable,” she added.
