Meta is facing a reckoning over its child safety practices as a trial surfaces fresh allegations that the company prioritized profit and engagement over protecting children.
The landmark trial in New Mexico has now completed its fifth week, with the state attorney general resting its case on 5 March. Proceedings are expected to continue for another week as Meta presents its defense before the jury begins deliberations.
Central to the case are internal company documents obtained by the attorney general’s office during discovery, including emails between Meta executives flagging urgent issues of exploitation on Facebook and Instagram.
“Data shows that Instagram had become the leading two-sided marketplace for human trafficking,” stated one email to Adam Mosseri, the head of Instagram, sent from a member of Meta’s product team in 2019, which was read in court.
Prosecutors have presented evidence they say demonstrates delays and deficiencies in Meta’s ability to detect and report harms to children on its platforms, including the distribution of child sexual abuse material – photos and videos of the sexual exploitation of children – and child trafficking.
In both the New Mexico trial and concurrent court proceedings in Los Angeles, Facebook and Instagram features have also come under scrutiny for their alleged impact on children’s mental health. The plaintiffs claim the social networks are intentionally addictive and amplify content promoting self-harm, suicidal ideation and body dysmorphia.
The defense has vigorously rejected the attorney general’s allegations as “sensationalist, irrelevant and distracting arguments”, arguing that Meta goes to great lengths to make its platforms safe and continues to invest in new protective features for teens. The jury has also heard from company executives, including Mosseri and Mark Zuckerberg, Meta’s CEO, who have defended the company’s safety track record. They also argued that with billions of users across Facebook and Instagram worldwide, preventing every crime and harm that takes place on them would be impossible.
“We do our best to keep Facebook safe, but we cannot guarantee it,” said Mosseri, who flew into Santa Fe to be a witness for the defense after his video deposition was played in court earlier in the trial. “Safety is incredibly important to us.”
The lawsuit comes after a two-year investigation by the Guardian, published in 2023, which revealed Meta had difficulty stopping people from using its platforms to traffic children. The investigation is referenced multiple times in the lawsuit’s filings.
The two cases strike at an existential question for Meta: can it protect its next generation of users? If the company wants its social networks to survive and grow, it needs to recruit new, younger users. Meta argues its social networks provide safer environments than any alternative. The New Mexico attorney general argues the tech company does not adequately protect the teens already on its sites and apps, as do the plaintiffs in the Los Angeles trial, who allege that Meta designs its products to addict young people. Child safety advocates who testified at the trial in Santa Fe said the encryption of Messenger and an enormous backlog in Meta’s reports of child abuse have stymied investigations of child exploitation.
Documents from the cases have demonstrated just how much Meta wants young people on its platforms. One internal email reads: “Mark has decided that the top priority for the company in 2017 is teens,” referring to Zuckerberg. The CEO denied on the witness stand that the company targets users under 13, its cutoff for creating an account, though he said age restrictions were difficult to enforce.
Meta faces global regulatory scrutiny as it awaits the dual verdicts in the US. Countries around the world are following in the footsteps of Australia’s ban on social media for those under 16. The fourth-most populous country in the world has already committed to an age gate of its own, as has the third-largest state in the US. If the New Mexico and Los Angeles trials end with Meta found liable for child sex trafficking and intentionally addictive design, they may sway more lawmakers to cut the company off from the users it needs.
One of the main pillars of New Mexico’s case is an investigation called “Operation MetaPhile” by the attorney general’s office. Undercover agents posing as girls aged under 13 were contacted by three suspects, who allegedly solicited them for sex after searching for minors through design features on Facebook and Instagram. Two made plans to meet the “girl” at a motel in Gallup, New Mexico.
The agents did not initiate any conversations about sexual activity, according to the state’s court filings. One of their accounts received a surge of activity, with hundreds of friend requests per day, and had accrued 7,000 followers within one month, an investigator said. Despite this activity, Meta did not shut the account down and instead sent it information about how to monetize accounts and grow its following, investigators said.
The state also presented allegations that Instagram’s algorithms connect pedophiles with one another or help them find sellers of child sexual abuse material, a characterization Mosseri labelled “unfair”.
“I think what we see with these particularly bad actors is they really actively try to work around our systems by disguising things,” Mosseri said. “They try to find each other on our platform.”
Former company executives testified against their ex-employer.
“I absolutely did not believe that safety was a priority, which is the primary reason that I left,” said Brian Boland, former Meta vice-president of partnerships, who spent 11 years at the company before leaving in 2020.
Encrypted Messenger blocked access to evidence of crimes
The New Mexico court heard how Meta’s decision to encrypt Facebook Messenger, which predators have used as a tool to groom minors and exchange child abuse imagery, has blocked access to crucial evidence of these crimes.
In December 2023, Meta introduced end-to-end encryption for Facebook Messenger, its direct messaging platform. Encryption ensures that only the sender and intended recipient can view messages by converting them into unreadable code that is decrypted upon receipt. The message content is not stored on Meta’s servers and is not viewable by law enforcement.
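End-to-end encryption of this kind rests on public-key cryptography. The sketch below is illustrative only: it uses the open-source PyNaCl library and assumes nothing about Messenger’s actual implementation (which Meta has said is based on the Signal protocol). It demonstrates the core property the trial testimony turns on: a server that merely relays the ciphertext cannot read it.

```python
# Illustrative sketch of end-to-end encryption using the PyNaCl library.
# This is not Messenger's implementation (which is based on the Signal
# protocol); it only shows the basic principle the article describes.
from nacl.public import PrivateKey, Box

# Each party generates a key pair; private keys never leave the device.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
sender_box = Box(sender_key, recipient_key.public_key)
ciphertext = sender_box.encrypt(b"hello")  # all the relaying server ever sees

# Only the recipient's private key (paired with the sender's public key)
# can decrypt the message.
recipient_box = Box(recipient_key, sender_key.public_key)
assert recipient_box.decrypt(ciphertext) == b"hello"
```

Because the platform only ever handles the encrypted bytes, server-side scanning of message content for abuse material is no longer possible, which is the trade-off NCMEC objected to.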
The National Center for Missing &amp; Exploited Children (NCMEC), which is partially funded by Meta, called the move a “devastating blow to child protection”, and its representatives had met with Meta several times in attempts to dissuade the company from implementing encryption, the court heard.
Social media companies headquartered in the US are required by federal law to report child sexual abuse material (CSAM), apparent child sex trafficking, and indications of coercion and enticement of minors on their platforms to NCMEC. Acting as a clearinghouse, NCMEC forwards these “cyber tip” reports to the relevant law enforcement agencies across the US and internationally.
The encryption of Messenger means that “visibility into content or interactions that are occurring is taken away. That doesn’t mean that the abuse stops occurring,” testified Fallon McNulty, executive director of the exploited children division at NCMEC.
She said that Meta submitted 6.9m fewer reports to NCMEC in 2024, after Messenger’s encryption was implemented, than in the previous year.
Meta has previously defended encryption as safe because users can report any inappropriate interactions or abuse they experience while using Messenger. Privacy advocates commend encryption as the strongest protection against surveillance by law enforcement.
“We use sophisticated technology to proactively identify child exploitation content on our platform – and between July and September 2025 we removed over 10m pieces of child exploitation content from Facebook and Instagram, over 98% of which we found proactively before it was reported,” a Meta spokesperson said. “We also provide in-app reporting tools, with dedicated options to let us know if content involves a child.”
In her testimony, McNulty highlighted that relying on children to report abuse was not an adequate substitute for the scanning of messages and images now that Messenger was encrypted. According to NCMEC studies, a majority of children choose not to report any abuses or threats made to them on the platforms.
Despite Meta’s defense of Messenger encryption on the grounds that users can report abuse, Mosseri acknowledged that self-reporting mechanisms on Instagram were much less effective than the company’s technological scanning for abuses on the platform. He also described abandoned plans to encrypt Instagram direct messages: the company determined that encrypting them would make it more difficult to keep children safe on the platform, he said.
He said: “We find that using technology seems to be much more effective than user reports to find bad content.”
Reporting backlogs and errors affected child safety
The jury heard that between May 2017 and July 2021, Meta had a reporting backlog of 247,000 cyber tip reports of potential harms and abuses, which were several weeks or months old when they were sent to NCMEC. Because information about child abuse is often time-sensitive, these backlogs may have meant opportunities to prevent crimes or identify perpetrators were lost.
According to documents presented in evidence, thousands of other cyber tip reports were improperly classified as low priority. The company did not give NCMEC insight into the cause of the delays and mislabeling. NCMEC regarded the mass misclassification as “a serious failing that affected child safety”, McNulty testified.
The jury heard how law enforcement had become frustrated with the lack of detail in some of Meta’s reports, which meant officers could not investigate them further. Law enforcement officers who investigate potential child abuse previously told the Guardian that Meta had flooded the cyber tip reporting system with “junk” tips that were useless to investigators, and one officer made the same point on the witness stand. Other large platforms had done a better job of providing actionable information in their reports, McNulty said in her testimony.
In 2022, 31 of the country’s 61 Internet Crimes Against Children (ICAC) task forces opted out of receiving some lower-priority cyber tip reports from Meta because they considered the information too poor in quality to be actionable, the jury heard.
The quality issues with Meta’s cyber tips had been going on “for years”, and NCMEC had expected them to be “resolved sooner”, McNulty said.
“Our image-matching system finds copies of known child exploitation at a scale that would be impossible to do manually, and we work to detect new child exploitation content through technology, reports from our community and investigations by our specialist child safety teams,” a Meta spokesperson said. “We also continue to support NCMEC and law enforcement in prioritizing reports, including by helping build NCMEC’s case management tool and labelling cyber tips so they know which are urgent.”
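The “image-matching system” the spokesperson describes is, in general terms, hash matching against databases of known abuse imagery; Meta has open-sourced one such perceptual-hashing algorithm, PDQ. The toy sketch below illustrates only the general technique, not Meta’s pipeline, and substitutes an exact cryptographic hash for simplicity, where production systems use perceptual hashes that tolerate resizing and re-encoding.

```python
import hashlib

# Toy illustration of hash matching against a list of known image hashes.
# Real systems (e.g. PhotoDNA, or Meta's open-source PDQ) use perceptual
# hashes that still match after resizing or re-encoding; the exact SHA-256
# digest used here matches only byte-identical copies.

# Placeholder entry: real deployments load vetted hash lists from
# clearinghouses rather than hardcoding them.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_known_match(image_bytes: bytes) -> bool:
    """Return True if the image's digest appears in the known-hash set."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES
```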
The Guardian has previously reported that AI-generated tips that have not been confirmed as reviewed by a social media company employee often cannot be opened by law enforcement without a warrant because of fourth amendment protections. Lawyers involved in such cases say this additional step can slow investigations into potential crimes.
At the trial, it was revealed that in 2022, more than 14m of Meta’s reports to NCMEC had not involved a human review, meaning they could not be opened by NCMEC or law enforcement without a warrant. The prevalence of unreviewed reports and the resulting impacts on law enforcement had been communicated to Meta several times, McNulty testified.
Teens, addiction, filters and self-harm content affected mental health
In a video deposition played in court, Zuckerberg acknowledged that some users, including children, find Meta’s platforms addictive, which is also the subject of a separate trial taking place in Los Angeles.
Internal documents from Instagram made evident how much the company knew about its tween users and their problems despite its 13-and-over policy, according to the plaintiffs’ lawyers. A 2018 presentation from Instagram revealed in the Los Angeles trial reads: “If we wanna win big with teens, we must bring them in as tweens.” Another, from 2015, estimated that about 30% of 10- to 12-year-olds in the US used the photo-sharing app. Yet another detailed a goal of increasing the time 10-year-olds spent on the Instagram app, and one more documented how often 11-year-olds logged on to the app compared with older users.
At the New Mexico trial, Ian Russell, whose daughter Molly died by suicide in 2017 after viewing large amounts of harmful content on Instagram, testified for the state about the platform’s potential mental health impacts.
Russell said: “That inescapable stream of harmful content, the cumulative effect that content would have had on a growing brain, a young person, a 14-year-old, turned Molly from that bright, hopeful young person into someone who unbelievably thought she was a burden and a problem and that the best thing for her to do would be to end her life.”
Evidence presented at trial included internal communications about augmented-reality filters on Instagram that allowed users to alter their appearance, such as enlarging lips or eyes. An email from a former Meta employee to Zuckerberg warned that teens using these features would be at greater risk of self-image and mental health issues.
“As a parent of two teenage girls, one of whom has been hospitalized twice for body dysmorphia, I can tell you, the pressure on them and their peers coming through social media is intense with respect to body image,” the former employee wrote.
Jurors heard that a temporary ban was placed on the augmented-reality features in October 2019, and lifted by Zuckerberg in mid-2020.
“It has always felt paternalistic to me that we’ve limited people’s ability to present themselves in these ways, especially when there’s no data I’ve seen that suggests doing so is helpful or not doing so is harmful, and that there’s clearly demand for this type of expression,” the CEO said of his decision.
“Meta bans those [filters] that directly promote cosmetic surgery, changes in skin color or extreme weight loss,” a company spokesperson said.
Other internal documents presented in court alleged that Zuckerberg approved allowing minors to interact with artificial-intelligence chatbot companions despite warnings from safety staff that the bots could engage in sexual conversations. Prosecutors also alleged that Meta placed advertisements from companies, such as Walmart and Match Group, alongside content that sexualized children, potentially generating revenue from such material.
“Instagram Teen Accounts have built-in protections which limit who can contact them, and the type of content they see, defaulting them into private accounts and the strictest message settings, so they can only be messaged by people they follow or are already connected to,” a Meta spokesperson said. “Teens under 18 are automatically placed into Teen Accounts, and teens under 16 will need a parent’s permission to make any of these settings less strict.”
Arturo Béjar, a former Meta engineering director who became a whistleblower when his daughter received sexually inappropriate messages from strangers on Instagram, took the stand. Béjar told the court the platform’s recommendation system was “really good at connecting” predators with minors.
When Béjar reported the issue to the company, he said he understood that executives such as Zuckerberg and Chris Cox, chief product officer, already knew this was a problem.
“That’s when I first realized the executive team knows about the harm that’s falling on the product, and they’re choosing not to act on it,” Béjar said. “I don’t think we can trust Mark Zuckerberg and Meta with our kids.”
