(844) 627-8267 | Info@NationalCyberSecurity

Online child safety law blocked after Calif. argued face scans not that invasive – Ars Technica

A California law requiring a wide range of platforms to estimate ages of users and protect minors from accessing harmful content appears to be just as unconstitutional as a recently blocked law in Texas requiring age verification to access adult content.

Yesterday, US District Judge Beth Labson Freeman ordered a preliminary injunction stopping California Attorney General Rob Bonta from enforcing the state’s Age-Appropriate Design Code Act (CAADCA), finding that the law likely violates the First Amendment.

“The Court finds that although the stated purpose of the Act—protecting children when they are online—clearly is important,” Freeman wrote, “the CAADCA likely violates the First Amendment.”

“Specifically,” Freeman said, “the age estimation and privacy provisions thus appear likely to impede the ‘availability and use’ of information and accordingly to regulate speech,” and “the steps a business would need to take to sufficiently estimate the age of child users would likely prevent both children and adults from accessing certain content.”

NetChoice—a trade group whose members include tech giants like Meta, TikTok, Google, and Amazon—filed a lawsuit requesting the preliminary injunction last December. Yesterday, Chris Marchese, the director of the NetChoice Litigation Center, celebrated the court’s decision to grant the injunction.

“We appreciate the district court’s thoughtful analysis of the First Amendment and decision to prevent regulators from violating the free speech and online privacy rights of Californians, their families, and their businesses as our case proceeds,” Marchese said. “We look forward to seeing the law permanently struck down and online speech and privacy fully protected.”

A group of civil society organizations, legal scholars, parents, and youth advocates known as the Kids Code coalition expressed disappointment in Freeman’s decision. They felt the court blocked “a road to accountability for tech companies” and gave “tech companies a free pass to put profit over kids’ safety online.”

Echoing California’s arguments, the Kids Code coalition contends that the CAADCA “is not about speech or content,” but “about designing safe products.” The coalition argued that “the First Amendment does not shield corporations from accountability for their profit-motivated design decisions that endanger kids’ well-being” and said it “strongly support[s] an appeal of this ruling.”

“Almost every product children use from cribs to car seats is regulated so that they will be safe for children,” the Kids Code coalition said. “Yet social media companies design their products with little regulation, and children are hurt and even die because these products are not required to be designed for the safety of young users. As Big Tech has shown time and again, they will pull out all the stops to continue to profit off of significant harm to kids and teens with impunity.”

Bonta’s press office told Ars that the state has no comment beyond: “We are disappointed by the decision and will respond in court as appropriate.”

CAADCA likely “exacerbates” online harms to kids

Regulators’ attempts to age-gate the Internet have drawn criticism, and courts have repeatedly found that these laws likely run afoul of the First Amendment. But perhaps more troubling here, Freeman found that the CAADCA not only risked restricting speech but also did not appear to address or mitigate the harms to children identified by the state. Even worse, after California argued that requiring face scans or other biometric data to estimate user ages was “minimally invasive,” Freeman concluded that enforcing the law could cause more harm than good. “Such measures would appear to counter the State’s interest in increasing privacy protections for children,” Freeman wrote, explaining:

“CAADCA’s age estimation provision appears not only unlikely to materially alleviate the harm of insufficient data and privacy protections for children, but actually likely to exacerbate the problem by inducing covered businesses to require consumers, including children, to divulge additional personal information.”

CAADCA also requires tech companies to submit reports assessing whether their platforms’ designs could harm children—including by exposing them to harmful content—and to create a plan to mitigate those harms. Freeman was not convinced by California’s argument that the law “has nothing to do with speech,” finding instead that the required reports alone “regulate the distribution of speech and therefore trigger First Amendment scrutiny.”

“The State has no right to enforce obligations that would essentially press private companies into service as government censors, thus violating the First Amendment,” Freeman wrote.

NetChoice had argued that “the CAADCA violates the First Amendment and the dormant Commerce Clause of the United States Constitution” and “is preempted by both the Children’s Online Privacy Protection Act (COPPA) and Section 230 of the Communications Decency Act.” But Freeman said that she only had to review the First Amendment challenge to grant the preliminary injunction, leaving the rest to be argued as the case resumes.

Could CAADCA be revised?

According to Freeman, the age estimation provision specifically triggered her concerns that CAADCA would have a “potentially vast chilling effect on speech,” because any business that chooses not to estimate ages would have to “apply the privacy and data protections afforded to children to all consumers.” Even if the government had a substantial interest in protecting kids from harms, Freeman said that “the inevitable effect will be to impermissibly ‘reduce the adult population… to reading only what is fit for children.'”

Freeman seemed to suggest that CAADCA could perhaps be fixed if California were able to more clearly “define what uses of information may be considered ‘materially detrimental’ to a child’s well-being.” Going into more detail than simply forcing platforms to decide “what is in the best interest of the children” would make it easier for platforms to know what content must be kept out of kids’ reach in order to avoid violations—which include “civil penalties of $2,500 per child for each negligent violation and $7,500 for each intentional violation.”

The judge agreed with groups supporting NetChoice that warned the court that “what is ‘in the best interest of children’ is not an objective standard but rather a contentious topic of political debate.” Freeman’s fear was that the law, in restricting access to harmful content, inadvertently “throws out the baby with the bathwater” by preventing platforms from targeting children with beneficial content.

“In seeking to prevent children from being exposed to ‘harmful unsolicited content,’ the Act would restrict neutral or beneficial content, rendering the restriction poorly tailored to the State’s goal of protecting children’s well-being,” Freeman wrote.

For the law to be fixed, it would seem that California would likely need to clearly define what’s considered materially harmful to a child, remove the age estimation provision, and remove the provision requiring platforms to report on design features that could be harmful to kids. But that, Freeman suggested, would make the remainder of the law “obsolete.”

“The only meat left of the Act would be four unchallenged mandates and prohibitions that together would require covered businesses to provide children with obvious tracking signals and prominent and responsive tools to exercise their privacy rights, and to refrain from collecting children’s precise geolocation data,” Freeman wrote.

It’s unclear whether lawmakers will succeed in passing laws that stop platforms from harming kids without improperly restricting speech for everyone. But a law requiring platforms to avoid tracking users’ behavior online, provide more privacy controls, and collect no precise geolocation data would likely benefit everyone seeking more privacy on the Internet, including kids.

Privacy experts have argued for years that the solution to protecting kids’ privacy online is not to pass laws that restrict kids’ access to the Internet, but laws that prevent platforms from invasive data collection for all users. But while laws like CAADCA often pass unanimously or near unanimously—since pretty much everybody acknowledges online harms exist for kids—comprehensive data privacy laws face much more resistance among regulators.

Notably, Freeman wrote that even California’s privacy law—which was the first comprehensive consumer privacy legislation in the US—is much more limited in scope than CAADCA. Where California’s privacy law “gives users authority to make decisions about their own personal information,” CAADCA “goes far beyond the scope of protections offered by” the privacy law, putting the responsibility back on platforms to identify “any risk of material detriment to children arising from the provider’s data management practices.”

Whether California’s privacy law could be strengthened to resolve fears over kids’ online activity is unclear, but what seems clear to Freeman—who wrote that she was “mindful that the CAADCA was enacted with the unanimous support of California’s Legislature and Governor”—is that CAADCA could cause more harm than the state realizes. And until regulators resolve persistent issues with flawed attempts to age-gate the Internet, kids will seemingly remain in harm’s way, because, Freeman wrote, for kids who increasingly depend on the Internet for education and entertainment, “unplugging is not a viable option.”
