Meta is expanding its safety features to better protect children on Instagram. The company announced Wednesday it will apply its strictest settings to accounts that feature children, even if they are run by adults. This change follows the removal of over 635,000 accounts for predatory behavior.
The move comes after years of pressure from parents, advocates, and global regulators. These groups have long demanded the company do more to shield young users from online exploitation and scams. This latest update appears to be a direct response to that sustained criticism.
New Protections for Child-Focused and Teen Accounts
In a detailed announcement, Meta confirmed it is extending its “Teen Account” protections to adult-managed accounts that primarily feature children. These so-called “child influencer” accounts will now default to the platform’s strictest direct message settings to prevent unwanted contact.
The “Hidden Words” feature, which filters out offensive comments, will be enabled automatically. Meta will also make these accounts harder for potentially suspicious users to find in search and will hide those users’ comments on their posts.
For all teen users, Meta is rolling out new tools within direct messages. A “Safety Tips” icon will provide quick access to information and controls, including a combined block-and-report option that streamlines the process of flagging and stopping unwanted interactions.
Meta’s data suggests these prompts are effective. The company stated that “in June alone, they blocked accounts 1 million times and reported another 1 million after seeing a Safety Notice,” indicating that teens are actively using the tools. Its nudity protection feature, meanwhile, has a 99% adoption rate, and in May it prevented 45% of attempted forwards of flagged images.
This action is bolstered by aggressive enforcement. Meta revealed it recently removed nearly 135,000 Instagram accounts for leaving sexualized comments on posts featuring children. An additional 500,000 linked Facebook and Instagram accounts were also taken down as part of the sweep.
A Reaction to Mounting Regulatory and Public Pressure
These updates did not occur in a vacuum. They follow a period of intense public and regulatory scrutiny. In April 2025, grieving families and advocates protested outside Meta’s NYC headquarters, demanding accountability for online harms that led to tragic outcomes for their children.
The pressure is also legislative. In Europe, Meta faces formal proceedings under the powerful Digital Services Act (DSA). The European Commission is investigating whether the company’s platforms have addictive designs and if its age verification tools are effective enough.
Former European Commission Executive Vice President Margrethe Vestager had been clear about the DSA’s intent, stating at the time, “with the Digital Services Act, we established rules that can protect minors when they interact online.” Violations could lead to substantial fines, creating a powerful incentive for Meta to demonstrate compliance.
In the United States, the legislative landscape is similarly fraught. The Kids Online Safety Act (KOSA), a bipartisan bill Meta previously lobbied against, was reintroduced to Congress in May 2025 after failing to pass late last year. Analysts suggest this safety push is timed to placate lawmakers.
Child safety advocates, however, remain skeptical. Groups like the NSPCC and Fairplay argue that these features, while welcome, are insufficient.
The Broader Industry Battle Over Responsibility
Meta’s focus on its own platform’s tools exists alongside a contentious industry-wide debate over where the ultimate responsibility for child safety lies. For months, Meta has advocated for app-store-level age verification and a pan-European “digital majority age.”
This position puts it in direct conflict with rivals like Google. Following the passage of a Utah law mandating app store verification, Google accused Meta of trying to “offload their own responsibilities,” saying such proposals create “new risks to the privacy of minors, without actually addressing the harms that are inspiring lawmakers to act.” In short, Google argues that centralizing age data in app stores would endanger minors’ privacy rather than protect it.
Instagram’s policy chief, Tara Hopkins, has defended Meta’s stance, arguing, “I think it makes much more sense that this is done at the ecosystem, app store, operating system level,” framing it as a more efficient, ecosystem-wide solution.
The debate illustrates the complex challenge facing Big Tech: while parents and regulators demand action, the world’s largest platforms disagree on the best path forward. The US Surgeon General has warned that “adolescents who spend more than three hours a day on social media face double the risk of anxiety and depression symptoms,” highlighting the urgent need for effective solutions.