A Glimmer of Transatlantic Tech Consensus: Child Safety


It’s an area of agreement across the Atlantic Ocean: the internet harms children, and regulation is required. The problem is determining what to do. Companies and regulators are squabbling over what measures will be effective and who should be responsible for verifying age controls. 

What is indisputable is the widespread concern over online safety. Nine out of ten EU citizens “consider protecting children online an urgent concern,” according to a March Eurobarometer survey. In the US, 46% of American teens ages 13 to 17 report being bullied or harassed online, a 2022 Pew Research survey found. A recent all-day Federal Trade Commission workshop on kids’ online safety emphasized a clear message: the internet has become dangerous for kids, and strict new laws are required to protect them.

Regulators are rushing into action. The European Commission is rolling out an age verification app this summer. It recently published guidelines instructing online platforms to verify users’ age, set children’s accounts to private by default, provide child-friendly reporting tools, and tailor how content is recommended to children.

In the US, the Kids Online Safety Act, or KOSA, enjoys bipartisan support in Congress. The Senate approved it last year by 91-3, though it died in the House amid concerns from Republican leadership and digital rights groups that it would lead to censorship. The bill would hold social media companies responsible for taking “reasonable” care to avoid product design features that put minors in danger of self-harm, substance abuse, or sexual exploitation. It would also require online platforms to activate their strongest privacy settings by default for minors and allow them to disable “addictive” product features.  

In Europe, child safety groups and national governments are pushing Brussels to go further and faster. Denmark’s wellbeing commission recommended prohibiting social media in schools and limiting access to smartphones and tablets for children under 13. France’s President Emmanuel Macron wants to ban children under 15 from social media.


Tech companies are divided on how to respond. While Apple supports the proposed US law, a fight has erupted over who should be responsible for checking children’s ages. Meta says the companies running operating systems or app stores — Google’s Android, Microsoft’s Windows, and Apple’s iOS — should be accountable, not the platforms. The Facebook owner is vocal about its stance, placing ads on bus stops across Brussels.

A similar debate is playing out in the US. Congress and several states are eyeing age-verification laws requiring app store operators to identify minors. Not surprisingly, Google and Apple disagree. They point out that Europe’s Digital Services Act requires all online platforms to offer age-appropriate experiences. In a blog post this month, Google said that Meta’s proposal “fails to cover desktop computers or other devices that are commonly shared within families” and “could be ineffective against pre-installed apps, as Meta’s often are.” 

The tech companies are divided on other issues, too. Apple has published a white paper and rolled out child safety features, including the ability for parents to select their child’s age range during device setup — information that can then be shared with apps that want to restrict access to certain content. Meta, in contrast, champions a “child safety curriculum” that aims to teach children and young people how to recognize grooming, sextortion scams, and online exploitation. YouTube leverages artificial intelligence to estimate a user’s age to “help provide the best and most age-appropriate experiences and protections.”

Although tech companies agree that online child safety must be addressed, they fear the proposed legislation is too stringent. Meta has led the opposition to KOSA, arguing that it limits freedom of speech. “The proposed legislation fails parents because it won’t help a single child with online safety or address parents’ concerns,” argues Amy Bos.

But the lobbying may backfire. KOSA now has a new lease on life: Apple recently sent a letter to the Senate cosponsors expressing support for the proposed law, and House Speaker Mike Johnson has promised to get it passed. The White House has yet to support the legislation explicitly, though Elon Musk and Donald Trump Jr. have expressed support. First Lady Melania Trump pushed ahead with the Take It Down Act, which criminalizes non-consensual intimate imagery online and requires online platforms to remove it quickly.  

Child safety will be a test of how far Washington and Brussels are willing to turn up the heat on big tech. Europe has pledged to simplify onerous tech regulations, and the Trump administration vows to pursue deregulation. Yet both could impose strong new rules on how tech treats teens.

William Echikson is a Non-resident Senior Fellow with the Tech Policy Program and editor of the online tech policy journal Bandwidth at the Center for European Policy Analysis (CEPA). 

Elly Rostoum is a Google Public Policy Fellow with the Center for European Policy Analysis (CEPA). She is a Lecturer at Johns Hopkins University. 

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.

