Each time a reluctant tech CEO is dragged before Congress to answer questions about the harms that take place on their platforms, I strive to keep an open mind. Perhaps this will be the time, I tell myself, that we hear a productive discussion on the much-needed reforms that tech companies are often too slow to implement.
But while Congress is generally more educated on tech subjects today than it was when the backlash began in 2017, the hearings still play out much as they did at the beginning: with outraged lawmakers scolding, questioning, and interrupting their witnesses for hours on end, while bills that might address their concerns continue to languish without ever being passed. With so little of substance accomplished, the press can only comment on the spectacle: of the loudest protesters, the harshest insults, and the tensest exchanges.
After five hours of combative testimony, Wednesday’s Senate Judiciary Committee hearing on child safety appears destined to be remembered mostly as a tech hearing like any other: long on talk, and short on hope that it will lead to a bill being passed.
The event’s signature moment was the seemingly impromptu apology that Meta CEO Mark Zuckerberg offered to protesters in the audience. Here’s Angela Yang at NBC News:
“I’m sorry for everything you’ve all gone through,” Zuckerberg said after Sen. Josh Hawley, R-Mo., pressed him about whether he would apologize to the parents directly. “It’s terrible. No one should have to go through the things that your families have suffered.” […]
After he apologized, Zuckerberg told parents that “this is why we invest so much and are going to continue doing industry-leading efforts to make sure that no one has to go through the types of things that your families have had to suffer.”
I say “seemingly” because Meta was eager to get the word out: a full transcript of Zuckerberg’s apology arrived in my inbox from a Meta spokeswoman moments after he gave it, and the company also posted his remarks to social networks.
Ultimately, though, the goal of the hearing was not to get an apology from one of the CEOs. It was to press the leaders of Meta, TikTok, Snap, Discord, and X on the steps they take to prevent children from experiencing various harms that they face on those platforms: bullying, grooming, extortion, and many more.
Ahead of the hearing, Meta’s former head of youth policy shared “red herrings” to look for at the hearing that participants would use to dodge the real issues: invoking one’s status as a parent; sharing the number of people the company has working on the issue; sharing the number of features the company has introduced to protect children; promoting all the conversations the company has had with stakeholders.
Touting those numbers allows companies to escape scrutiny of how their corporate structure and design processes often relegate child safety to the sidelines, Vaishnavi J. wrote.
She suggested alternative questions: “How do you incorporate responsible design into your product development processes? What are your internal review processes and escalation paths to ensure that any existing or new product meets a predetermined set of online safety requirements? Over the last five years, how often have you blocked products from launching because they were not safe enough for children, or withdrawn products from the market after receiving feedback on the harms they were causing?”
These are good questions, and I did not hear any good answers to them at Wednesday’s hearing.
Instead we got theater. Sen. Lindsey Graham told Zuckerberg, “You have blood on your hands.” Sen. Tom Cotton pursued what the Washington Post accurately described as “a McCarthy-esque line of questioning” against TikTok CEO Shou Zi Chew, whom he repeatedly asked about his citizenship and whether he was a member of the Chinese Communist Party. (Chew is Singaporean.)
And, as usual, senators took the chance to play a round of do-you-support-my-bill-yes-or-no with the assembled CEOs, cutting off any answer other than a yes or a no before nuance could be added to the discussion.
The three first-time congressional witnesses benefited from senators’ focus on the veterans. Discord CEO Jason Citron appeared to struggle but was mostly left alone, as was Snap CEO Evan Spiegel. X CEO Linda Yaccarino spun out an elaborate fantasy about X being a 14-month-old company, as if Twitter had never existed, and somehow got away with it.
Somewhere in all this, a handful of good ideas were heard. Zuckerberg pushed persuasively for age verification at the device level, which among other things would prevent parents from having to navigate child safety controls inside dozens of individual apps on their children’s phones. Some senators pushed for expanded resources for law enforcement to investigate and prosecute those trading child sexual abuse material.
And as Alicia Blum-Ross, former policy director at Twitch, noted today, platforms have gradually begun to take more seriously the idea that apps should be designed differently for teens than for adults. She argues that pushing for changes in user experience will likely benefit teens more than blocking them from using social media.
“A more restrictive default, combined with a well-timed forced-choice in the user experience, can provide the friction needed for a teen to reconsider a risky post or comment,” she wrote. “Age-tuned settings, rather than blocking access, is far more palatable for older teens than creating a walled garden they won’t use, leading them to seek out platforms with fewer protections in place.”
All that said, I still worry that both the Senate and the CEOs are falling into the trap of techno-solutionism. There’s no doubt that tech companies can and should reduce harm by working to reduce the spread of bullying, harmful content, CSAM, and extortion.
But it would be a mistake to lay the broader teen mental health crisis at the feet of tech companies alone. As researcher danah boyd, who has long studied children and social media, wrote this week in a piece criticizing the Kids Online Safety Act:
Bills like KOSA don’t just presume that tech caused the problems youth are facing; they presume that if tech companies were just forced to design better, they could fix the problems. María Angel pegged it right: this is techno-legal-solutionism. And it’s a fatally flawed approach to addressing systemic issues. Even if we did believe that tech causes bullying, the idea that they could design to stop it is delusional. Schools have every incentive in the world to prevent bullying; have they figured it out? And then there’s the insane idea that tech could be designed to not cause emotional duress. Sociality can cause emotional duress. The news causes emotional duress. Is the message here to go live in a bubble?
The solution is not “make tech fix society.” The intervention we need to an ecological problem is an ecological one.
For all the hearing’s flaws, I do believe tech companies should face pressure to limit the harm on their platforms. Recent revelations from the state attorneys general lawsuit against Meta have laid out in disturbing detail the extent to which the company identified risks to young people and did too little to reduce them.
But we shouldn’t view the platforms in a vacuum, either. Whatever platforms do to support teens won’t change the fact that mental health care remains broadly inaccessible, dozens of school shootings take place every year, and teens continue to suffer the traumatic effects of living through a global pandemic.
Tech companies may indeed have teens’ blood on their hands, as Graham told Zuckerberg. But we should never forget that Congress does, too.
X’s day in court
The first National Labor Relations Board hearing for Elon Musk’s X Corp. wrapped up yesterday. We don’t have a decision — the briefing period alone extends until March — but the facts don’t look good for Musk’s company.
To recap, this case involves Yao Yue, a former principal engineer at X who was fired on November 15, 2022, after tweeting the following (and posting a similar message on Slack): “Don’t resign, let him fire you. You gain literally nothing out of a resignation.”
Yue filed an unfair labor practice charge last March, and the NLRB issued a complaint alleging the speech was protected and the firing was illegal.
Musk’s lawyers, from Morgan, Lewis & Bockius — the same firm representing SpaceX in a similar case — argued the complaint was “dead on arrival” because, in their view, Yue was a supervisor (a classification that is typically not covered by the National Labor Relations Act), and her message to colleagues was insubordinate.
Yue was a manager on the infrastructure team prior to the November 4 layoffs. But afterward, as X went through a series of reorganizations, she was demoted to an individual contributor. These facts were supported by two witnesses, including former global head of infrastructure Nelson Abramson, and by evidence introduced by Musk’s own team, which inadvertently showed that Yue’s teammate was seeking work-from-home approval from Yue’s manager rather than from Yue.
Abramson estimated that prior to the Nov. 4 layoffs, the infrastructure team had roughly 1,000 people on it, including around 150 managers. After the layoffs, in which Musk specifically targeted managers, maybe 10 or 15 managers remained. He noted that after Nov. 4, Yue wasn’t a manager and had no managerial authority.
The second point, that Yue’s tweet was insubordinate, doesn’t seem to hold water. Yue was engaging in collective action, telling her colleagues that it wasn’t in their best interest to resign, rather than ordering them to disobey Musk’s directive. “I’d be shocked if this board found that to be insubordination,” former labor board member Wilma Liebman told Bloomberg.
Musk’s lawyers didn’t even seem to understand the difference between a quote tweet and a reply. At one point, they tried to say that Yue’s offending tweet had enormous reach, because it was a “reply” to my tweet, and my account had tens of thousands of followers. Except it wasn’t a reply — it was a quote tweet, which meant it primarily reached Yue’s followers, of which she had fewer than 5,000.
The wheels of justice grind slowly, and a decision won’t arrive for at least a month. Should the NLRB find that X did in fact violate federal labor law, Yue could receive back pay — and the company could be forced to issue a notice informing employees of their right to organize.
— Zoë Schiffer
Build Relationships. Catalyze Your Career.
The best leaders are lifelong learners who surround themselves with a close-knit community of people who offer support and insight. With the rate of change in tech, growing alongside other top leaders who understand the challenges you face is the difference between having a job and having the career you want. Round is the private network for senior product and engineering leaders. With peer-based learning, member events, and a vibrant digital community, Round is a catalyst for your career in tech. Apply to join and mention Platformer to skip the waitlist.
On the podcast this week: Kevin and I try Apple’s Vision Pro. Then, we react to clips from Wednesday’s Senate hearing on child safety. And finally, we dig into how a single car accident took down Cruise.
Apple | Spotify | Stitcher | Amazon | Google | YouTube
- Lawmakers are bracing for an influx of AI-generated child sexual abuse material, and are now looking to address the complicated issue of detection and authentication. (Eileen Sullivan / The New York Times)
- OpenAI says early tests show that GPT-4 “at most” poses a slight risk of helping people create biological threats. Phew! (Rachel Metz / Bloomberg)
- The DEFIANCE Act was introduced in Congress in response to AI-generated non-consensual nude photos of Taylor Swift, aiming to let victims sue the people who create such images. (Adi Robertson / The Verge)
- The FCC is introducing a proposal to outlaw unsolicited AI-generated robocalls, following the fake Biden call during the New Hampshire primary. (Kevin Collier / NBC News)
- Tech lobbyists are mounting a campaign against several states drawing up data privacy laws, seeking to undercut the most significant aspects, including a private right of action and opt-in data protections. (Suzanne Smalley / The Record)
- Mark Zuckerberg says Apple’s stringent App Store developer rules introduced to comply with the Digital Markets Act are “onerous” and doubts developers will choose alternative app stores. (Sarah Perez / TechCrunch)
- Major search engines, including Google and Bing, are “one-click gateways” to harmful content related to self-harm and suicide, a report by the UK’s Ofcom found. (Ingrid Lunden / TechCrunch)
- YouTube and Koo, an X alternative specializing in Indian languages, are allowing policy-violating misogyny and hate speech on their platforms in India despite the content being reported, a report found. (Vittoria Elliot / WIRED)
- Lots of earnings news today:
- ByteDance CEO Liang Rubo reportedly told staffers that the company was behind on generative AI and warned against complacency, after other Chinese tech executives voiced similar concerns. (Jane Zhang / Bloomberg)
- Universal Music Group pulled its music off of TikTok after failed negotiations, so songs from artists like Taylor Swift, Drake and Ariana Grande will no longer be available on the app. TikTok put out a scathing statement about UMG’s “greed” in response. (Todd Spangler / Variety)
- Apple’s Tim Cook is all in on the Vision Pro, saying that he knew for years that the company would get to this point, but it’s hard to predict where AI and spatial computing will go. (Nick Bilton / Vanity Fair)
- Google’s internal AI ethics division, which reviews products for compliance, is reportedly being restructured following the departure of its leader. (Paresh Dave / WIRED)
- Google launched ImageFX, an AI image generator powered by Imagen 2. (Kyle Wiggers / TechCrunch)
- Google Maps is getting generative AI, allowing users to ask for restaurant and shopping recommendations. (Andrew J. Hawkins / The Verge)
- YouTube and Facebook are by far the most-used platforms by US adults, but TikTok has grown significantly since 2021, a Pew Research Center survey says. (Jeffrey Gottfried / Pew Research Center)
- Meta is reportedly deploying a second generation of in-house custom AI chips to its data centers this year. (Katie Paul, Stephen Nellis and Max A. Cherney / Reuters)
- The Meta Quest is also getting an update to play the spatial videos you can shoot with the newest iPhones. (Scott Stein / CNET)
- Paid ChatGPT users can now bring GPTs, third-party apps based on OpenAI’s models, into any chat. (Kyle Wiggers / TechCrunch)
- Snap is recalling and refunding all of its Pixy flying selfie drones because their batteries pose a fire hazard. Feels like Snap can’t catch a break lately. (Sean Hollister / The Verge)
- Hulu is starting to ban users from sharing passwords outside of their households, effective March 14. (Alex Weprin / The Hollywood Reporter)
- Shopify is rolling out new features, including an AI image editor for merchants to enhance product images. (Ivan Mehta / TechCrunch)
- Pinterest CEO Bill Ready criticized social media and stressed the importance of using AI to create positivity in an interview. (Hannah Murphy / Financial Times)
- This year’s mass layoffs are different from last year’s — more targeted layoffs for big companies, but bigger cuts for startups looking to survive. (Mike Isaac / The New York Times)
- The Allen Institute for AI is releasing several open-source generative AI language models, along with the data used to train them, for developers to use. (Kyle Wiggers / TechCrunch)
- AI is being used to speed up drug developments and allow researchers to identify treatments faster, but much of the effectiveness is still untested. (Naomi Kresge and Ian King / Bloomberg)
Those good posts
For more good posts every day, follow Casey’s Instagram stories.
Talk to us
Send us tips, comments, questions, and your online child safety solutions: email@example.com and firstname.lastname@example.org.