
Elon Musk’s X fined $380K over “serious” child safety concerns, watchdog says – Ars Technica


Today, X (formerly known as Twitter) became the first platform fined under Australia’s Online Safety Act. The fine comes after X failed to respond to more than a dozen key questions from Australia’s eSafety Commissioner Julie Inman Grant, who was seeking clarity on how effectively X detects and mitigates the harms of child sexual exploitation and grooming on the platform.

In a press release, Inman Grant said that X was given 28 days to either appeal the decision or pay the approximately $380,000 fine. While the fine seems small, the reputational ding could further hurt X’s chances of persuading advertisers to increase spending on the platform, Reuters suggested. And any failure to comply or respond could trigger even more fines—with X potentially on the hook for as much as $493,402 daily for alleged non-compliance dating back to March 2023, The Guardian reported. That could quickly add up to tens of millions if X misses the Australian regulator’s deadline.

“If they choose not to pay, it’s open to eSafety to take other action or to seek a civil penalty through the courts,” Inman Grant told the Sydney Morning Herald. “We’re talking about some of the most heinous crimes playing out on these platforms, committed against innocent children.”

While eSafety has reported that all the major tech companies—including Meta, Apple, Microsoft, Skype, Snap, Discord, TikTok, Twitch, X, and Google—have “serious shortfalls” when it comes to tackling child sexual abuse material (CSAM) and grooming, X’s non-compliance “was found to be more serious.”

In some cases, X left responses “entirely blank,” Inman Grant reported, and in others, X provided inaccurate information. The report explained:

“Twitter/X did not respond to a number of key questions including the time it takes the platform to respond to reports of child sexual exploitation; the measures it has in place to detect child sexual exploitation in livestreams; and the tools and technologies it uses to detect child sexual exploitation material. The company also failed to adequately answer questions relating to the number of safety and public policy staff still employed at Twitter/X following the October 2022 acquisition and subsequent job cuts.”

X did not respond to Ars’ request for comment.

Back in February when the Australian watchdog first issued then-Twitter a compliance notice, Twitter Safety boasted on the platform that Twitter was “moving faster than ever to make Twitter safer and keep child sexual exploitation (CSE) material off our platform.” Last month, that account—which is now called X Safety—posted that “there is no place in this world or on X for the abuse of children,” claiming that “over the past year we have strengthened our policies, deployed new automated technology, and increased the number of cybertips we send to” the National Center for Missing and Exploited Children.

That post also said that X has taken “action on five times as much content” as the platform did in 2022, noting that “95 percent of the accounts we suspend we find before any user reports,” which was “up from 75 percent.” Australia’s report clarified that before Musk acquired Twitter, the platform was proactively detecting 90 percent of CSAM, but after mass layoffs, the amount of proactive CSAM detection fell to 75 percent, and X failed to specify to eSafety how much it has improved since then.

According to eSafety, X was one of only two major tech companies that did not provide “median response times to user reports of child sexual exploitation material.” X was joined only by Google—which today was issued a formal warning by the Australian regulator, but not fined—in failing to provide this critical metric, obscuring how quickly X responds to and removes CSAM. Some platforms told eSafety that offering varied CSAM reporting methods—like buttons to flag some content but webforms to report other content—made the metric harder to track. It seems possible, then, that adopting a single platform-wide reporting method for CSAM could aid transparency in the future. Surfacing solutions like that, which could help platforms evolve their safety measures over time, is one of eSafety’s goals in enforcing the Online Safety Act.

While X’s failure to respond to some of eSafety’s questions was concerning, some of the answers X did provide also troubled the commissioner, including responses confirming that X does not invest in technology to prevent child exploitation on livestreams or to detect grooming on the platform. On the former, X said that users are vetted before they can launch livestreams; on the latter, X seemed to skirt responsibility, saying that “children are not our target customer, and our service is not overwhelmingly used by children.”

Although X may not attract many users under 13—the minimum age to sign up—X CEO Linda Yaccarino recently confirmed that Generation Z (ages 11 to 26) “was the company’s fastest-growing demographic, with 200 million teenagers and young adults in their 20s visiting the platform each month,” The New York Times reported. X has also previously told advertisers that “nearly half of all Tweets sent over the course of the year in the US came from Twitter users aged 16 to 24.” It therefore seems possible that X’s failure to detect grooming behaviors could put some of the young Gen Z users joining the platform at risk, which troubled the commissioner.

eSafety’s report said that X also does not use “any tools” to detect known CSAM videos—which could help the platform quickly remove illegal content that has already been flagged—on public posts or in direct messages, because X has “been developing the technology needed to support this.” That internal hash-matching technology was initially projected to launch in April 2023, but X told the commissioner that it is still in development, promising that X “anticipates being able to make this available” for public posts “imminently.” It appears that technology to address known CSAM shared in direct messages may take longer, though, with X confirming that for now, it’s taking “alternative steps such as using ‘a range of behavioral signals, in addition to content signals, to determine if accounts are violating our terms of service, as well as user reports.’”
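For context on the hash-matching approach the report refers to: the basic idea is to fingerprint each uploaded file and compare that fingerprint against a database of fingerprints from material that has already been confirmed as abusive, so known content can be blocked before it spreads. The sketch below is a minimal, hypothetical illustration of that lookup step in Python; it is not X’s implementation, the KNOWN_HASHES set and function names are invented for illustration, and production systems rely on perceptual hashes (which survive re-encoding, resizing, and cropping) rather than the exact SHA-256 digest used here.

```python
import hashlib

# Hypothetical database of digests for already-flagged media.
# In practice these would be perceptual hashes supplied by a clearinghouse.
KNOWN_HASHES: set[str] = {
    # "3f5a9c...",  # digests of previously confirmed material
}

def fingerprint(media_bytes: bytes) -> str:
    """Return the digest used as the lookup key for an uploaded media item."""
    return hashlib.sha256(media_bytes).hexdigest()

def is_known_match(media_bytes: bytes) -> bool:
    """True if the upload matches previously flagged material,
    so it can be blocked or escalated before distribution."""
    return fingerprint(media_bytes) in KNOWN_HASHES

# Example: screen an upload before it is published.
if __name__ == "__main__":
    upload = b"...raw media bytes..."
    print("flagged" if is_known_match(upload) else "not in known-hash database")
```

The key limitation this sketch glosses over is why exact hashing is not enough: a single re-encode changes a cryptographic digest completely, which is why deployed systems use robust perceptual fingerprints instead.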

Inman Grant told The Sydney Morning Herald that X’s response to the compliance notice was “disappointing,” while more broadly confirming that “it was a hard slog to get what we needed” from all the major tech companies, which she said all need “to do better.”

“Twitter/X has stated publicly that tackling child sexual exploitation is the No. 1 priority for the company, but it can’t just be empty talk, we need to see words backed up with tangible action,” Inman Grant said.

Google gets formal warning

X wasn’t the only tech company disappointing eSafety today. Google was issued a formal warning for failing to provide requested information, with eSafety’s press release saying that Google only provided “a number of generic responses to specific questions” and “aggregated information when asked questions about specific services.”

According to Inman Grant, there are only two reasons why tech platforms would evade compliance with Australia’s transparency measures.

“If Twitter/X and Google can’t come up with answers to key questions about how they are tackling child sexual exploitation they either don’t want to answer for how it might be perceived publicly or they need better systems to scrutinize their own operations,” Inman Grant said. “Both scenarios are concerning to us and suggest they are not living up to their responsibilities and the expectations of the Australian community.”

Particularly concerning to the Australian watchdog was a finding that “Google is not using its own technology”—a free tool called CSAI Match—”to detect known child sexual exploitation videos on some of its services—Gmail, Chat, Messages.”

In the report, the commissioner acknowledged that an intervention tool that works well on one platform “may not be as effective or appropriate on another.” Still, it seems odd that Google wouldn’t deploy technology it built on its own services. Inman Grant told the Sydney Morning Herald that she was “particularly surprised that Google was as resistant and unable to answer the questions, given their maturity, their resourcing, their vocal messaging around the development of useful video technologies.”

Inman Grant suggested that any resistance from platforms like Google or X to provide adequate responses was likely an attempt to save face.

“We understand that it’s hard and it’s probably very confronting and exposing for these companies to actually say, ‘well … we have said this is our top priority, but really, we’re not doing anything,’” Inman Grant told The Guardian.

Google did not clarify whether it is developing known-CSAM detection tools for the services that currently lack them, instead telling the commissioner during the reporting phase that:

“The fight against CSAM is a difficult one and perpetrators are sophisticated and work hard to constantly bypass the systems. This means that we are constantly investing in updating our technology to ensure that it continues to be precise and effective. While Google remains committed to continuing to develop new products and features in ways that help keep children safe and at the same time preserve user security and privacy, it does not comment on future product plans.”

Google has expressed disappointment that eSafety issued a formal warning, saying that it responded to the inquiries in good faith and that it has developed CSAI Match and other tools to help organizations prioritize removing CSAM. In a statement, Google’s director of government affairs and public policy in Australia, Lucinda Longcroft, told Ars that “protecting children on our platforms is the most important work we do.”

Longcroft said that Google remains “committed to these efforts and collaborating constructively and in good faith with the eSafety Commissioner, government, and industry on the shared goal of keeping Australians safer online.”

For the Australian watchdog, these reports are intended to increase transparency around how well the world’s largest platforms are working to stop the reportedly rising sexual extortion of children online, not just in Australia but globally. After releasing this week’s report, the commissioner warned that it now seems clear that “some of the biggest tech companies aren’t living up to their responsibilities to tackle the proliferation of child sexual exploitation, sexual extortion, and the livestreaming of child sexual abuse.”

Inman Grant warned platforms that Australia would continue holding them accountable by shedding light on platforms’ “serious shortfalls.”

“We really can’t hope to have any accountability from the online industry in tackling this issue without meaningful transparency, which is what these notices are designed to surface,” Inman Grant said.
