Kids groups say they didn’t know OpenAI was behind their child safety coalition


In mid-March, organizers for child safety groups across the country received emails from an organization called the Parents & Kids Safe AI Coalition, asking if they would endorse its list of policy priorities. The listed principles for AI regulation included vague but fairly uncontroversial suggestions such as age verification, parental controls, and a prohibition on targeted advertising aimed at kids. “We believe it is important to demonstrate broad, visible support from parents, educators, community groups, and child-advocacy organizations to make clear that families expect action on AI this year,” some of the emails said.

What many of them did not state was that the Parents & Kids Safe AI Coalition was funded entirely by OpenAI, the world’s most popular AI chatbot company. The principles it was asking nonprofits to endorse mirrored policy proposals in a child safety bill OpenAI co-sponsored and filed as a ballot initiative this year, and now hopes the California Legislature will adopt.

As it works to get its preferred legislation passed in California, OpenAI has assembled a growing coalition of supporters that, in some cases, were unaware of its role in founding and funding the coalition.

Even the leaders of some of the groups that initially joined the coalition told The Standard they were not aware of OpenAI’s level of involvement. At least two original members said they learned of OpenAI’s role only after the coalition was formally announced and have since removed themselves from the group. 

“It’s a very grimy feeling,” said one of the nonprofit leaders, who asked not to be named for fear of repercussions. “To find out they’re trying to sneak around behind the scenes and do something like this — I don’t want to say they’re outright lying, but they’re sending emails that are pretty misleading.”

In a statement provided by a coalition spokesperson in response to The Standard’s questions, six members and OpenAI Vice President of Global Policy Ann O’Leary said they were “fighting for the strongest child AI safety law in the nation.”

“Across organizing efforts, media coverage, and even the website — our supporters and financial backers alike are proudly speaking up to protect kids and empower parents,” the statement said.

Other child safety groups told The Standard they refused to join the coalition over concerns with OpenAI’s involvement. Josh Golin, executive director of FairPlay, said he hoped OpenAI would take the hint and step back from policy discussions.

“I want them to get out of the way and let advocates and parents and public health professionals whose charge is the well-being of children pass the legislation they think is best for kids,” Golin said. “I don’t want OpenAI to write their own rules for how they interact with children.” 

Sam Altman’s company has opposed other efforts at kids’ safety regulation in the past. (Photo by Andrew Harnik) | Source: Getty Images

OpenAI knows it is facing a reckoning on kids’ use of its products. At least eight lawsuits claim that its lead product, ChatGPT, contributed to the deaths of users — including a 16-year-old California boy who died by suicide. Meanwhile, more than 20 states proposed legislation last year to regulate kids’ use of AI. A federal bill on the subject passed out of a U.S. House of Representatives committee early last month; the White House released its own proposals a few weeks later.  

In California, OpenAI has moved to counter multiple child safety initiatives. Lobbyists for OpenAI and other major industry groups actively opposed a state bill last year that would have tightly restricted kids’ access to AI chatbots; Gov. Gavin Newsom later vetoed it and signed less stringent regulations instead. When kids tech safety group Common Sense Media filed a ballot initiative in the fall to pass stronger protections directly via voters, OpenAI introduced a competing initiative to try to defeat it.

In January, to the surprise of many advocates, OpenAI and Common Sense Media announced that they were teaming up to work on a compromise ballot initiative, the Parents & Kids Safe AI Act. OpenAI pledged $10 million to the campaign, and on Jan. 8, three lawyers for the company formed a PAC to promote it: the Parents & Kids Safe AI Coalition. 

The Parents & Kids Safe AI Act faced swift backlash from child safety groups, some of which signed a letter suggesting the measure would shield AI companies from liability and undermine age verification processes, among other concerns. In February, according to Politico, a coalition of advocacy groups met with Common Sense to push back on the proposed legislation and question why the organization would cooperate with OpenAI.

Shortly after that meeting, OpenAI and Common Sense announced they were putting the ballot campaign on hold and would instead try to negotiate with the Legislature on a solution — though they did not drop the initiative entirely, saying they could resuscitate the effort in 2028 if the Legislature does not come up with an adequate fix.

Newly proposed California rules would regulate kids’ interactions with chatbots | Source: Photo by Samuel Boivin/NurPhoto via Getty Images

Emails reviewed by The Standard show that public affairs firms retained by the OpenAI-funded PAC started reaching out to child safety groups to garner support for the ballot initiative in February. The body of several of these emails did not mention OpenAI — saying simply that the initiative was “sponsored by Common Sense” — though a promotional flyer attached to the email contained a small, legally required disclosure at the bottom noting that OpenAI was the committee’s top funder. (Some emails provided by a spokesperson for the coalition noted that the initiative was sponsored by both Common Sense and OpenAI, and press coverage at the time noted the involvement of both.)

By March, the public affairs firms had switched to asking many of the same groups to endorse a slate of “core policy principles for AI child safety,” which they hoped would serve as the “framework for state legislation later this year.” Representatives said they were reaching out “on behalf of the Parents & Kids Safe AI Coalition” to demonstrate “broad, community-based support for strong children safety protections in the age of AI.” 

The body of such emails reviewed by The Standard did not mention OpenAI’s involvement; the attached flyer removed even the fine-print funding disclosure. 

When the coalition made its public debut March 17, the announcement did not mention OpenAI either, instead describing a “growing alliance of parents, educators, children’s advocates, social and civil justice organizations, business leaders, technology companies, and community groups.” (There is no indication that any other technology company is part of the coalition.) 

The home page of the coalition’s website also does not mention OpenAI — even in a rotating banner of the members that includes the logos of a dozen advocacy organizations. A page titled “Our Coalition” does not list OpenAI as a member.

The “newsroom” page does link to coverage of OpenAI’s involvement in the group, though most of the stories were published after the coalition launched. A Politico article linked on the page accurately describes the coalition as “OpenAI-backed” and notes the $10 million donation the company made to the ballot committee.

Still, representatives of three of the 14 organizations listed as coalition members in the March 17 announcement said they were unaware of OpenAI’s involvement until after the launch. The head of one organization said the phone call from The Standard was the first he’d heard of the OpenAI connection, though he didn’t see a problem with it. Two of the organizations said they asked to be removed from the list of supporters after learning about it.

Tom Lyon, a professor at the University of Michigan and expert on corporate political influence, reviewed the coalition website and said it meets the “classic definition of astroturfing” — a term used to describe corporations forming groups to support their aims with little disclosure of their involvement.

Even if OpenAI had made some disclosures of its support, Lyon said, “people don’t have the time or the motivation to do a lot of homework to figure out who’s funding what, so it’s easy to fool people.”

“Even if you want to give them the benefit of the doubt and say, ‘Oh well, [OpenAI] did put their name on this previous ballot initiative’ — what percentage of people are going to notice that?” he added.

It’s unclear what impact the coalition has had. In emails, representatives for the group suggested it was working on the legislation with Assemblymembers Buffy Wicks and Rebecca Bauer-Kahan. The assemblymembers and Sen. Steve Padilla last week introduced AI safety legislation that echoed some ideas from the OpenAI-Common Sense initiative, but both their offices told The Standard they had not spoken to the coalition about the bill. A representative for Bauer-Kahan said she did not even know who the coalition’s members were.

A spokesperson for Common Sense, meanwhile, said the organization is not part of the OpenAI-backed coalition and is continuing to speak with legislators about child safety initiatives on its own.
