November 16, 2023
National Telecommunications and Information Administration
1401 Constitution Ave. NW
Washington, D.C. 20230
Re: NTIA-2023-0008 — Initiative to Protect Youth Mental Health, Safety, & Privacy Online
Dear Chief Counsel Weiner,
On behalf of the R Street Institute’s (R Street) Technology and Innovation Policy team, I appreciate the chance to respond to this request for comment by the National Telecommunications and Information Administration (NTIA) on children’s online health and safety. R Street is a nonprofit, nonpartisan public policy research organization. We are concerned with striking the right regulatory balance between protecting youth from the many potential risks they face online and preserving the free flow of information and access to speech, which are defining features of the internet protected by the First Amendment.
It is encouraging that the request for comment acknowledges that social media can have positive as well as negative impacts on youth and teens. Too much of the policy conversation around protecting children online focuses solely on the downside risks of youth accessing social media and the internet without considering the many beneficial uses these technologies have to offer if accessed responsibly.
The tenor of proposals to protect children online has turned toward treating the internet—and particularly social media—as inherently dangerous to minors, including older teenagers. Some legislators at the federal and state levels have gone so far as to propose banning social media outright for users under age 18. Other proposals, such as the Kids Online Safety Act and California’s Age Appropriate Design Code, would make social media platforms liable for children being able to access any content that regulators deem potentially harmful to their mental health.
While the current technopanic assigns much of the blame for mental and physical health issues afflicting our youth to social media and the internet, actual evidence for this causation is lacking. Even in the U.S. Surgeon General’s alarming announcement that social media is driving a mental health crisis among America’s youth, the accompanying report admitted that the issue is more nuanced; that social media has beneficial impacts for many children and teens; and that its negative impacts are contextual rather than universal. The American Psychological Association (APA) 2023 report on minors and social media comes to a similar conclusion, with APA President Thema Bryant stating that “[s]ocial media is neither inherently harmful nor beneficial to our youth.” And Pew Research Center’s 2023 survey found that teens report far more positive than negative effects from their use of social media.
This is not to discount the real harms that exist online, such as violent and pornographic content, child exploitation, bullying and harassment, scams, and myriad other content that could be detrimental to the development of minors (and perhaps adults). Not all of these harms, however, necessarily have a top-down regulatory or legislative solution.
In the face of a wave of legislative proposals at both the state and federal levels to regulate youth access to online services, R Street published a (non-exclusive) set of principles, “Five Ways Not to Keep Kids Safe Online.” We are particularly concerned with the first principle: that well-intended policies aimed at protecting minors’ safety and privacy online could directly or indirectly force online platforms to enact some form of strict age verification. For sites primarily designed to host content that is illegal for minors, such as purveyors of gambling, alcohol or pornography, some degree of age gating makes sense. Requiring general-use internet platforms to know the ages of their users, however, is a much trickier proposition that can have unintended consequences.
To begin with, requiring sites to accurately know whether a given user is a minor means subjecting every user to age verification—adults and youth alike. The means of accomplishing this range from submitting a government ID or credit card to submitting a selfie or facial scan for biometric age estimation. All of these methods are imperfect and pose difficulties that make holding sites to a real-knowledge age-verification standard inadvisable. 
A number of significant studies and commissions have attempted to determine whether there is a workable method for age gating internet platforms in a way that squares with privacy, free speech and the First Amendment. For example, in 2008, a blue-ribbon, multi-stakeholder task force headed by Harvard University concluded that “there is no one technological solution or specific combination of technological solutions to the problem of online safety for minors.” In 2022, the French data protection authority, the Commission Nationale de l’Informatique et des Libertés, undertook a similar review and came to a similar conclusion.
Ironically, given the general desire to further protect the data privacy of both children and adults, mandatory age verification would greatly increase the amount of sensitive personal data that sites have to access and would discourage the minimization of said data. Although privacy-protecting age-identification technologies have advanced, they remain imperfect and generally require submitting to a biometric scan, which is intrusive even if the data from the scan is not kept.
In addition, age verification as a condition to access social media would likely prove unpopular. A poll run by the Center for Growth and Opportunity at Utah State University shows that two-thirds of respondents were uncomfortable sharing ID documentation to access social media sites, and nearly 70 percent were uncomfortable with their children providing either ID or biometric information.
Even regulations that do not establish mandatory age verification outright can have unintended consequences for minors’ access to internet services. Nowhere has this been better illustrated than by the Children’s Online Privacy Protection Act (COPPA), which was aimed at the collection and use of data from children under age 13 without parental consent. Rather than deal with COPPA’s complicated compliance requirements, many websites with user profiles simply defaulted to banning users under age 13, and the relative dearth of online services tailored to this age range may well be a direct consequence of COPPA as well.
Even this solution was only workable because the Federal Trade Commission allowed for self-verification of age instead of making sites liable for knowing the real ages of their users. Should a COPPA-like privacy regime be extended to teens up to age 18, as some have suggested, it could well lead to larger swaths of the internet being barred to all minors. Instead, data privacy concerns relating to teens ought to be addressed as part of a much-needed overall federal data privacy and security framework.
Another popular demand is to mandate that online platforms enable certain content protections by default, such as disabling algorithmic features like customized news feeds or continuous autoplay for video content. However, because content recommendation algorithms are at the heart of what makes social media sites useful to their users, disabling them by default is likely to frustrate youth and adults alike. A better alternative is to embrace algorithmic tools as part of the solution, using them to help keep children from accessing or sharing harmful content.
Some policy changes could be made to help protect minors from danger online, but ultimately, the appropriate level of protection should be decided by parents—not government regulators. Public outcry has caused companies to build even more parental controls and safeguards into their products, and companies should be able to continue innovating and experimenting with ways to provide products that balance the need to protect youth with the need to serve the rest of their users well.
The most useful government intervention may be helping to educate parents about the multitude of existing tools and protections, many of which are built into the devices and software they already use. Meanwhile, the best protection for our digitally immersed youth may be education in digital literacy itself. In 2023, Florida passed a bill adding social media safety and literacy to its public school curriculum, a measure that may have a far more beneficial long-term effect on children’s safety than other states’ content regulations and social media bans.
In sum, we hope that the NTIA’s recommendations will emphasize that many of the problems ascribed to social media are best solved collaboratively, with an emphasis on helping parents make sound choices in managing their children’s screen time rather than relying on paternalistic, one-size-fits-all government mandates.
Fellow, Technology and Innovation
R Street Institute
1411 K Street NW
Washington, D.C. 20005
HB 896, An act relating to prohibiting the use of social media platforms by children, Texas State Legislature; S. 419, Making Age-Verification Technology Uniform, Robust, and Effective Act, 118th Congress.
S. 1409, Kids Online Safety Act, 118th Congress; AB 2273, The California Age Appropriate Design Code Act, California State Legislature; “We Need to Keep Kids Safe Online: California has the Solution,” 5Rights Foundation, last accessed Nov. 15, 2023. https://californiaaadc.com.
See, e.g., S. 1409; S. 1291, Protecting Kids on Social Media Act, 118th Congress.
CS/HB 379, Technology in K-12 Public Schools, Florida State Senate.