Speaking at the first-ever Instagram Safety Camp in Sydney last week, Meta’s global head of child safety, Ravi Sinha, revealed that conversations are underway with Apple and Google to embed age verification directly into phones, creating a universal, privacy-safe solution for protecting young users across all apps.
The announcement comes amid fierce debate in Australia over legislation that will ban under-16s from social media from December 2025.
While Meta’s latest safety features for teens have been years in the making, Sinha acknowledged the Australian conversation as globally significant and “quite unprecedented.”
“We think it’s just the simplest, most magical solution,” Sinha said of the phone-level age verification system. “When you have the parent set up the phone, you just ask the age of the person they’re setting it up for, and then tell us in a really privacy-protective way… That age can be shared with all the apps they approve of their teen using.”
Sinha confirmed Meta has been pushing for this shift, but the ball is ultimately in Apple’s and Google’s court.
“We’ve done everything we can to try to encourage this conversation,” he said. “But we don’t own the app stores.”
Australia is preparing to enforce what could be the world’s strictest social media age limits, with legislation set to prohibit children under 16 from accessing platforms like Instagram. Meta will likely be required to demonstrate the “reasonable steps” it has taken to comply.
Meta noted that the company is awaiting specific guidelines following the government’s recent Age Assurance Technology trials. It anticipates being told which age verification methods are considered compliant, though Sinha reiterated that phone-level solutions would be more consistent and less burdensome for parents.
“The reality is most teens use 40 apps a week,” he said. “It’s hard for parents to stay on top of what app their teen is using, what they’re doing on that app and how to monitor to make sure that they’re having safe experiences.”
The UK recently introduced mandatory age verification online, with some 6,000 pornography sites operating in the region now required to verify users’ ages. However, a team of “ethical hackers” showed Sky News that they were able to bypass the verification in a matter of seconds using widely available technology.
Meta’s Teen Accounts: Not A Reaction, But A Foundation
Despite the timing, Meta insists its new Teen Accounts aren’t a reaction to Australia’s ban. “We’ve been working on these features for years,” Sinha said. “This is not about compliance, it’s about building something smarter, safer, and more practical.”
The new Instagram Teen Accounts, rolling out globally, automatically activate the platform’s most restrictive safety and privacy settings for users under 18. This includes:
- Private accounts by default
- Nudity and bullying filters turned on automatically
- Limited content recommendations
- Restricted time usage (60 minutes per day)
- ‘Sleep mode’ disables notifications from 10 pm–7 am
- Mandatory parental approval to change safety settings for users under 16
According to Meta’s data, 97 per cent of 13–15-year-olds kept the default protections on, and 94 per cent of parents said the settings made them feel more confident about their teen’s digital life.
“We know teens won’t stop using their phones,” said Sinha. “So our goal is to reduce the likelihood they get pushed into less safe corners of the internet.”
Smart AI, But Still Not Enough
Meta uses machine learning and AI to flag suspicious age declarations, like when someone suddenly adds 10 years to their age profile, and requires verification through additional steps. But as Sinha admitted, teens can still “just lie.”
“We can check the ID at the bar, but someone also needs to check IDs at the door,” he said, likening the current system to a university bar on a Friday night.
That “door” should be the phone, he said, and it’s why embedding age data during device setup could become the industry standard.
The AI systems may also miss coded language, Sinha admitted, such as “unaliving” in place of suicide or “seggs” in place of sex. The team is constantly working to identify new terms like these, he said, but the system is far from perfect and some will continue to slip through the cracks as the AI trains.
While Australia awaits final enforcement guidance, Meta will continue to advocate for cross-platform, privacy-safe solutions that enable parents—not just platforms or governments—to play a leading role.
“We’re building for positive, additive, age-appropriate experiences,” Sinha said. “The trade-offs are worth it if we can help keep young people safe.”