There are better sources of parenting advice than tech billionaires, but we should certainly pay heed to how few of them let their children use their products.
Steve Jobs famously would not let his own kids have tablets. YouTube co-founder Steve Chen doesn’t want his children watching short-form content. Snap chief executive Evan Spiegel limits his kids to an hour and a half of screen time a week.
Then there’s Meta chief Mark Zuckerberg, the original brogrammer, who steers his children to use screens for communication — calling grandparents — rather than “passive consumption.”
The uneasy suspicion that tech bros knew just how harmful their products were for other people’s children has now been confirmed in court.
A Los Angeles jury has ordered Meta and YouTube to pay US$6 million to a young woman identified as KGM, finding the platforms knowingly designed features that were addictive and harmful to youth. (TikTok and Snap settled out of court.)
Memos entered into evidence show Meta knew about tens of millions of underage accounts on Instagram, where kids younger than 13 simply lie about their birthday.
In a group chat from 2020 marked “highly confidential,” an unidentified administrator delivered the staggering indictment: “We are causing reward deficit disorder, because people are binging on IG so much they can’t feel reward anymore … like their reward tolerance is so high.”
Meta says it will appeal. No doubt it’s eager to choke off the fire hose of accountability that is surely coming.
KGM’s lawsuit is one of thousands launched against multiple platforms. A New Mexico jury has ordered Meta to pay $375 million for failing to safeguard children from online predators; the state attorney general is calling for regulation and design changes, including age verification that actually works.
Weighed against billions in profits each quarter, multimillion-dollar judgments alone are unlikely to shift the calculus that has permitted social media companies to prioritize profit and engagement over children’s health.
There are two plausible ways to effect change: meaningful industry regulation, or a mass exodus of users.
In December, Australia led the charge with the world’s first social media ban for youth younger than 16, affecting TikTok, Facebook, Instagram, YouTube, Snapchat and Threads. Within a month, 4.7 million accounts had been deactivated nationwide.
Australia’s success has sparked similar legislation in France, Denmark and Indonesia. Brazil is going beyond social media, imposing new age-verification mechanisms for all platforms with content inappropriate for minors, on pain of fines of up to $10 million. It has also ordered platforms to avoid features that drive compulsive use, such as infinite scrolling.
Meanwhile in Canada, a recent Angus Reid poll found 75 per cent of adults support a full ban on social media platforms for kids younger than 16. Responses show staggering, near-unanimous concern over social media addiction, mental health impacts, misinformation, cyberbullying and online predators.
Every one of these concerns is well-founded, and potentially has devastating consequences. Yet we’ve taken more federal action to prevent kids from eating laundry detergent pods than we have to regulate access to platforms where kids were goaded into the reckless challenge in the first place.
Past efforts have focused on enforcement measures to halt dangerous content, including bullying and the non-consensual sharing of intimate images. The government has taken two passes at an Online Harms Act, only to watch the bill die on the order paper when a federal election was called last spring.
That effort deserves revived attention, but we need to think bigger.
Children deserve protection not only from the criminal misuse of online platforms, but also from the harms implicit in their design. They also deserve a voice in what that protection looks like.
Clearly, the adults have more to learn.
Robin Baranyai is a Postmedia News columnist.
write.robin@baranyai.ca
