Backgrounder: US Senate Judiciary Committee to Grill Tech CEOs on Child Safety

Image: Logos of the social media companies whose CEOs will testify before the US Senate Judiciary Committee on Jan. 31, 2024.

Tech executives from five major social media platforms are headed to the US Capitol at the end of the month to testify about their platforms’ efforts to protect children from sexual exploitation online. The hearing, which will take place before the Senate Judiciary Committee on Jan. 31, 2024, will feature the CEOs of Discord, Meta, Snap, TikTok, and X (formerly Twitter), as listed below:

  • Jason Citron, CEO of Discord
  • Mark Zuckerberg, CEO of Meta
  • Evan Spiegel, CEO of Snap
  • Shou Zi Chew, CEO of TikTok
  • Linda Yaccarino, CEO of X

While the CEOs of Meta and TikTok agreed voluntarily to testify, CEOs from Discord, Snap, and X were subpoenaed after what the Committee described as weeks of repeated refusals to appear. This included a failed attempt by the US Marshals Service to deliver a subpoena at Discord’s office, according to the Committee. Notably, there is no representative from Google or YouTube scheduled to participate as a witness at the hearing.

The hearing occurs against the backdrop of what appears to be growing momentum to pass child online safety legislation. In 2023, the Kids Online Safety Act (KOSA) and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) both advanced out of the Senate Commerce Committee without opposition. Hauling some of tech’s most high-profile figures before Congress to discuss how to stamp out child sexual exploitation online, an issue with near-universal bipartisan agreement, could be a political strategy to rally support for the proposed legislation.

The Senate Judiciary Committee also hosted a hearing last February on protecting children online, which featured witnesses from the National Center for Missing & Exploited Children (NCMEC), the American Psychological Association (APA), social media reform advocates, and more. Since last year’s hearing, in addition to KOSA and COPPA 2.0, Senators have introduced a number of bills designed to prevent the exploitation of kids online, including:

  • The STOP CSAM Act: This bill would allow victims of online sexual exploitation to sue social media platforms that promoted or facilitated the abuse, make it easier for victims to ask tech companies to remove child sexual abuse material (CSAM), and strengthen the CyberTipline reporting requirements. The bill advanced to the Senate by Unanimous Consent on May 11, 2023.
  • The EARN IT Act: The Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT Act) would establish a “National Commission on Online Child Sexual Exploitation Prevention” and amend Section 230 of the Communications Decency Act to narrow the liability protection it affords platforms for claims related to CSAM. Several versions of this bill have been introduced going back to 2020, all passing the Senate Judiciary Committee unanimously; the amended 2023 version advanced on May 5, 2023.
  • The SHIELD Act: The Stopping Harmful Image Exploitation and Limiting Distribution (SHIELD) Act would establish federal criminal liability for individuals who distribute “intimate visual depictions,” or nudes, without a person’s consent, and would fill gaps in existing law to prosecute persons who share explicit images of children. The bill was approved by voice vote in the Senate Judiciary Committee on May 11, 2023.
  • The Project Safe Childhood Act: This bill modernizes the investigation and prosecution of online child exploitation crimes, allowing federal prosecutors and law enforcement to work together using new technology to quickly rescue child victims and arrest offenders. The bill passed the Senate on Oct. 24, 2023, but presently there is no companion bill in the House.
  • The REPORT Act: This bill makes changes to rules governing the reporting of crimes involving the online sexual exploitation of children by requiring electronic communication service providers to submit reports to the National Center for Missing and Exploited Children (NCMEC) when they become aware of violations. It also extends the time providers must preserve the contents of reports from 90 days to one year, increases statutory penalties for knowingly and willfully failing to report CSAM, and expands reporting duties to include child sex trafficking and the coercion and enticement of minors. The REPORT Act passed the full Senate on Dec. 14, 2023, but presently there is no companion bill in the House.

In advance of the hearing, Tech Policy Press collected information on each platform that will appear, including key background on the companies and their respective child online safety policies: what steps they take to counter child exploitation and sexual abuse material, reporting options, known detection technologies in use, and transparency efforts. Our review also includes a section on recent controversies related to child safety that have embroiled each of the social media platforms headed to Capitol Hill.

Discord

  • Founded: 2015
  • Owner: Discord, Inc.
  • Leadership: Jason Citron, CEO

What it is: Discord is an ad-free, decentralized social messaging platform that describes itself as the easiest way to talk over voice, video, and text to stay in touch with friends and communities. It’s known as a hub for gamers, but many other types of organizations and individuals use the service, too.

Policy: Discord has a clearly stated policy against CSAM, “including AI-generated photorealistic CSAM.” It also has “a zero-tolerance policy for inappropriate sexual conduct with children and grooming,” and special attention is paid to predatory behaviors such as online enticement and “sextortion.”

Key CSAM detection technology: PhotoDNA and Discord’s open-source image-analysis AI model.

How to report policy violations: On Discord, users can report inappropriate messages or content in-app by holding down the message on mobile or right clicking the message on desktop, then selecting ‘Report Message.’ Discord also has a ‘Parent Hub’ that provides resources to guardians as well as educators, including how to contact the platform over policy violations.

Transparency: Discord’s transparency reports cover child online safety, including the volume of reports it receives and the account and server removals related to CSAM, child exploitation, and other child safety issues. While it is not straightforward to distinguish the relative prevalence of different child safety issues from these reports, in Q4 there were 416,036 reports related to “child safety” and 191,779 related to “regulated or illegal activities.” Discord also says it has a team that focuses solely on child safety and a dedicated engineering team for its safety efforts.

Documented issues and concerns

“Sextortion” and child grooming in hidden communities: A June 2023 investigation by NBC News revealed how adult users on Discord were using chat rooms and hidden communities to groom children before abducting them, to trade CSAM, and to trick minors into sending them nude images, a practice also called “sextortion.” The report found that since 2017, communications on Discord were involved in more than 35 cases in which adults were prosecuted on charges of kidnapping, grooming, or sexual assault. Another 165 prosecuted cases, including four crime rings, involved adults transmitting or receiving CSAM or “sextorting” children via Discord, according to the report.

The head of the tipline at the Canadian Centre for Child Protection (C3P) told NBC News that these findings were “just the tip of the iceberg,” and that the group had seen an increase in reports of child luring on Discord. Predators also often connected with children on other platforms like Roblox or Minecraft, then moved direct communications to Discord for its “closed-off environment.” Following the NBC News report, Discord banned teen dating servers as well as AI-generated CSAM.

Months after the investigation, in October 2023, Discord’s Head of Trust and Safety John Redgrave sat for an interview with Semafor, where he discussed the risks generative AI poses for the rapid proliferation of CSAM online and how Discord moved from a generalist trust and safety team to dedicated teams for law enforcement engagement and CSAM detection technologies. While this detection technology is predominantly used for still images, it can be extended to video with enough effort, according to Redgrave. “Engineers can do magical things,” he told Semafor. It’s unclear whether progress has been made on this front.


Meta

  • Founded: 2004
  • Products: Facebook, Instagram, WhatsApp
  • Leadership: Mark Zuckerberg, CEO

What it is: Facebook is a social networking site where users can share information and media with friends and family. Instagram is a video and photo-sharing app that allows users to edit with filters, organize by hashtags, tag by geography, and more. Meta also owns and operates Threads, WhatsApp, and other subsidiaries.

Policy: Meta’s “Child Sexual Exploitation, Abuse, and Nudity” policy states it does not allow “content, activity or interactions that threaten, depict, praise, support, provide instructions for, make statements of intent, admit participation in or share links of the sexual exploitation of children (including real minors, toddlers or babies or non-real depictions with a human likeness, such as in art, AI-generated content, fictional characters, dolls, etc).” It provides content warning screens for certain non-sexual acts of child abuse by law enforcement or military personnel or imagery posted by news agencies depicting child nudity in the context of famine, genocide, war crimes, or crimes against humanity.

CSAM detection technology: PDQ and TMK+PDQF, PhotoDNA, Content Safety API, Take It Down
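
For readers unfamiliar with hash-based detection tools like PDQ or PhotoDNA, the sketch below illustrates the general pattern they rely on: compute a perceptual hash of an uploaded image, then compare it against hashes of known abuse imagery, treating anything within a small Hamming distance as a match. This is only an illustrative sketch under stated assumptions; the matches_known_hash helper, the sample hash values, and the 31-bit threshold (a commonly cited PDQ default) are stand-ins, not any platform’s actual implementation.

```python
# Illustrative sketch of perceptual-hash matching (PDQ/PhotoDNA-style).
# Hash values, threshold, and helper names are hypothetical stand-ins.

def hamming_distance(hash_a: int, hash_b: int) -> int:
    """Count the bits that differ between two fixed-length hashes."""
    return bin(hash_a ^ hash_b).count("1")

def matches_known_hash(upload_hash: int, known_hashes: set[int],
                       max_distance: int = 31) -> bool:
    """True if the upload's perceptual hash falls within the distance
    threshold of any hash on the known-content list."""
    return any(
        hamming_distance(upload_hash, known) <= max_distance
        for known in known_hashes
    )

# Example: 256-bit PDQ-style hashes compared against a tiny known list.
known = {0b10110010 << 248, 0b11110000 << 248}
print(matches_known_hash(0b10110011 << 248, known))  # True: differs by one bit
```

In production systems the hash lists typically come from clearinghouses such as NCMEC or from a platform’s own prior detections, and matching runs at upload time; the comparison step, however, reduces to a loop like the one above.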

How to report policy violations: Facebook encourages users to first contact local law enforcement, then to report the photo or video to Facebook directly by expanding it to full-size and selecting report in the overflow menu. It also asks users to notify the NCMEC using the CyberTipline and emphasizes not sharing, downloading, or commenting on the content.

Transparency: Meta issues a quarterly Community Standards Enforcement Report, which has a section on “Child Endangerment: Nudity and Physical Abuse and Sexual Exploitation.” In its Q3 report, replete with several interactive data visualizations, it highlighted its 99 percent “proactive rate” for CSAM takedowns, up from 97 percent in Q2. It also publishes blog posts on occasion highlighting how Meta ‘prevents child exploitation’ in its apps.

Documented issues and concerns

SG-CSAM on Instagram. In June 2023, the Stanford Internet Observatory (SIO) published a report that identified a large network of social media accounts, purportedly run by minors, openly selling self-generated child sexual abuse material (SG-CSAM). Instagram was the preferred platform for buyers and sellers, and its recommendation algorithms were a key reason for the network’s effectiveness, the report found. Meta responded by setting up an internal task force to investigate the report’s claims. It also removed a feature that allowed potential buyers to override CSAM warnings and “see results anyways.” Despite these changes, SIO researchers found months later that the network’s tactics had evolved, and that countering a rapidly adapting SG-CSAM network requires proactive attention, with human investigators best suited to track these shifts.

State of New Mexico sues Meta. In December 2023, New Mexico Attorney General Raúl Torrez filed a 225-page lawsuit against Meta and its CEO, Mark Zuckerberg, for allegedly failing to protect children from sexual abuse, online solicitation, and human trafficking. The suit, which called Facebook and Instagram a “breeding ground” for child predators, argues that the flawed design of Instagram and Facebook led to the recommendation of CSAM and child predator accounts to minor users. In response to the filing, a Meta spokesperson told Bloomberg Law that it “disabled more than 500,000 accounts in August 2023 for violating child safety policies” and “sent 7.6 million reports of child exploitation to the National Center for Missing and Exploited Children in the third quarter” of 2023.

The court released a mostly unredacted version of the New Mexico Attorney General’s suit in January 2024. New information then emerged about the extent to which Meta knew about the volume of inappropriate content shared between adults and minors, as laid out in an internal 2021 Meta presentation estimating that 100,000 children received online sexual harassment, such as pictures of adult genitalia, each day on its platforms. The filing also revealed that Meta knew its platforms were popular with children as young as six years old, and leveraged this in pursuit of its goal of making Facebook Messenger the primary messaging app for kids by 2022. Meta employees also flagged that “sex talk” targeted at minors was 38 times more prevalent in Instagram direct messages than on Messenger.

Congressional appearances

Meta’s former Director of Global Safety testifies about mental health harms. Meta’s Director of Global Safety, Antigone Davis, testified before the US Senate Commerce, Science, and Transportation Committee on Sept. 30, 2021. The hearing, titled “Protecting Kids Online: Facebook, Instagram, and Mental Health Harms,” was held in response to whistleblower documents revealing that Facebook researchers knew its Instagram app was negatively impacting teenage girls’ body image and exacerbating mental health issues, despite top Facebook executives previously denying this before Congress. The hearing focused on child online safety more generally, rather than on child sexual abuse material on Meta’s platforms.


Snapchat

  • Founded: 2011
  • Owner: Snap Inc.
  • Leadership: Evan Spiegel, CEO

What it is: Instant messaging app that allows users to exchange pictures and videos, or ‘snaps,’ that are designed to disappear after being viewed.

CSAM detection technology: PhotoDNA, Google’s Child Sexual Abuse Imagery Match

Policy: Snapchat prohibits the sharing of child sexual exploitation or abuse imagery, grooming, or sexual extortion (“sextortion”) and reports all instances to authorities. Sexual exploitation “may include sex trafficking; efforts to coerce or entice users to provide nudes; as well as any behavior that uses intimate imagery or sexual material to pressure or threaten” users. Communications that seek to coerce minors for the purpose of sexual abuse, or “which leverages fear or shame to keep a minor silent,” are also banned by Snapchat.

How to report policy violations: Snapchat offers in-app reporting.

Transparency: Snapchat publishes its Transparency Reports twice per year, in accordance with the European Union’s Digital Services Act, which went into effect for Very Large Online Platforms in late 2023. Snapchat has been publishing transparency reports since 2015 and expanded them to include reports of CSAM in June 2020. In its transparency report covering the first half of 2023, Snapchat said it “proactively detected and actioned 98 percent of the total child sexual exploitation and abuse violations reported here — a 4% increase from the previous period.” According to the report, nearly 230,000 accounts were deleted over Child Sexual Exploitation and Abuse Imagery (CSEAI) in the six-month period.
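
As a rough guide to what a figure like that 98 percent “proactive” rate usually means, the short snippet below computes the share of actioned violations a platform found on its own (for example, via automated hash matching) rather than through user reports. The formula and the sample numbers are assumptions for illustration, not Snap’s published methodology.

```python
# Rough sketch of a "proactive detection rate" calculation. The formula
# and figures are illustrative assumptions, not Snap's methodology.

def proactive_rate(proactively_detected: int, total_actioned: int) -> float:
    """Percentage of actioned violations detected before any user report."""
    return 100 * proactively_detected / total_actioned

# Hypothetical figures: 98,000 of 100,000 actioned violations found proactively.
print(proactive_rate(proactively_detected=98_000, total_actioned=100_000))  # 98.0
```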

Documented issues and concerns

CSAM video on the platform. In 2020, the national parent group ParentsTogether launched a petition demanding Snapchat expand the use of its CSAM scanning technology to include videos in addition to images. At the time of the petition, 92 percent of Snapchat’s user base was between the ages of 12 and 17, and 1.4 billion videos were viewed on the app each day. More than 100,000 parents signed the petition, which arose from a slew of incidents involving “sextortion” and videos depicting the sexual abuse of minors. In response to the petition, Snapchat expanded its screening to include videos in August 2020. Still, a ParentsTogether report published in 2023 found that “Facebook, Instagram, and Snapchat were the most named platforms for child sexually explicit requests and sextortion.”

My AI launch. Last February, Snapchat launched My AI, a friend-like chatbot that runs on OpenAI’s GPT technology. Within months, researchers’ tests found My AI giving a 13-year-old Snapchat Plus user advice on having sex with an adult she met on the app and on how to lie to her parents about it. After Snapchat was accused of using children as AI test subjects, it promised to add parental controls around the chatbot, and delivered them nearly a year later. These controls include the ability for parents to restrict My AI from responding to their teens’ chats, among other measures.

NCOSE alleges Snap is profiting from sexual abuse and exploitation. Snapchat later landed on the National Center on Sexual Exploitation’s “Dirty Dozen” list for 2023. The list is part of a decade-long annual campaign that selects the top twelve mainstream tech entities “facilitating, enabling, and even profiting from sexual abuse and exploitation” on their platforms. NCOSE moved Snapchat to its “Watch List” in September 2023 after the company made changes in response to the list, including protecting teens from contact with strangers, restricting teens from viewing sexually suggestive and explicit content, and providing additional in-app resources.

Congressional appearances

Snap Vice President of Global Public Policy Jennifer Stout testified before the US Senate Committee on Commerce, Science and Transportation for a hearing on “Protecting Kids Online” on Oct. 26, 2021. Stout framed Snapchat as the “antidote to social media” because it focuses on connecting people who already know each other, and said it focuses on privacy by making images and messages delete by default. While Stout told Congress that Snap believes social media regulation is necessary, she said the pace of technological development means “regulation alone can’t get the job done.”


TikTok

  • Founded: 2016
  • Owner: ByteDance
  • Leadership: Shou Zi Chew, CEO

What it is: TikTok is a short-form video app that allows users to create, watch, and share videos ranging from 15 seconds to three minutes long.

CSAM detection technology: PhotoDNA, Content Safety API, CSAI Match

Policy: TikTok says it has zero tolerance for child sexual abuse and sexualized content of minors (any person under the age of 18). It defines sexualized content of minors, or child sexual abuse material (CSAM), as any visual, textual, or audible depiction or production of explicit or inferred child sexual assault and child exploitation, and notes that searching for, viewing, creating, or sharing such content is illegal and places minors and society at risk of extreme harm.

How to report policy violations: Either in-app, via a generalized reporting form, or through trusted partner organizations INHOPE, the Internet Watch Foundation (IWF), and the National Center for Missing & Exploited Children (NCMEC).

Transparency: TikTok biannually produces a number of different reports broken down by topics like the ‘Digital Services Act,’ ‘Information Requests,’ and ‘Government Removal Requests,’ among others. In its first DSA Transparency Report, published in October 2023, TikTok self-reported that it took 1,740 actions against illegal ‘child sexual exploitation’ content.

Documented issues and concerns

Third-party moderators exposed to CSAM content. In August 2022, Forbes reported that content moderators hired to train TikTok’s AI to spot the “worst of the worst” posted on its app were shown graphic images and videos of child sexual exploitation in their job training exercises. Employees of the third-party moderation outfit Teleperformance claimed that TikTok asked them to review a “Daily Required Reading” spreadsheet replete with material in violation of TikTok’s community guidelines. The document allegedly contained numerous images of children naked or being sexually abused, and was easily accessible to at least hundreds of TikTok and Teleperformance employees. While both companies denied claims that their training material contained CSAM, and TikTok said its training material has strict access controls, little clarification was provided on what those controls look like.

Enabling CSAM through specific products and features. A Forbes investigation found that TikTok’s post-in-private accounts made it seamless for predators and underage victims of sexual exploitation to meet and share illegal images. These product features, on top of “major moderation blind spots” like easy banned account workarounds, highlighted how TikTok struggles to enforce its own “zero tolerance” policy for CSAM, according to the report.

Other TikTok product features have come under fire, prompting investigations by the US government. In 2022, it came to light that child predators were misusing TikTok’s ‘Only Me’ setting, which allows users to save TikTok clips without posting them publicly, by uploading CSAM as ‘Only Me’ videos and sharing their private accounts’ passwords. And early last year, a Wall Street Journal investigation found that TikTok’s algorithm recommended more of the same videos to adults who watch videos of young people, making the platform ripe for sexual exploitation. Two weeks after the Wall Street Journal report, TikTok rolled out a series of new features designed for teens and families. These included automatically setting sixty-minute daily time limits for users under 18, allowing parents to set a mute-notification schedule, and creating a ‘sleep reminder’ feature. TikTok highlighted how these features “add to our robust existing safety settings for teen accounts,” including making accounts for users under 16 ‘private by default’ and limiting direct messaging to users 16 and older.

Congressional appearances

TikTok’s CEO Shou Zi Chew is no stranger to Capitol Hill, having appeared before the House Energy and Commerce Committee on March 23, 2023. The hearing, titled “TikTok: How Congress Can Safeguard American Data Privacy and Protect Children from Online Harms,” focused on how consumer privacy and data security practices impact children, among other topics.

In October 2021, TikTok’s Vice President and US Head of Public Policy Michael Beckerman appeared before the US Senate Committee on Commerce, Science and Transportation for a hearing on “Protecting Kids Online,” alongside Snapchat and YouTube executives. Beckerman distinguished TikTok from other social media platforms by its “direct communication” focus and emphasis on “uplifting, entertaining content.” Lawmakers in turn grilled Beckerman over whether TikTok’s ownership would leave American consumer data exposed to the Chinese government upon request.


X (formerly Twitter)

  • Founded: 2006
  • Owner: Elon Musk
  • Leadership: Linda Yaccarino, CEO

What it is: X is a social networking app where users share their thoughts in ‘posts’ (formerly ‘tweets’) that contain text, videos, photos, or links.

CSAM detection technology: PhotoDNA, “Internal tools”

Policy: X’s official ‘child sexual exploitation policy’ was published on its website in October 2020. It states that “X has zero tolerance towards any material that features or promotes child sexual exploitation,” including in media, text, illustration, and computer-generated images. It also applies to content that “may further contribute to victimization of children through the promotion or glorification of child sexual exploitation.”

The policy was later updated, in December 2023, to reflect X’s work to address child sexual exploitation online. The update states that the service automated its monitoring and NCMEC CyberTipline reporting of CSAM in images, videos, and GIFs using hash-matching technology. It now automatically suspends user accounts, deactivates content, and sends reports “without human involvement.” X also blocks users from searching for common child sexual exploitation (CSE) terms.
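
To make the described flow concrete, here is a minimal sketch of an automated hash-match-and-report pipeline of the kind the update describes: media matching a known-CSAM hash is removed, the posting account is suspended, and a CyberTipline report is queued, all without human review. Every function and field name below is hypothetical; X has not published its implementation.

```python
# Minimal sketch of an automated detect-suspend-report flow.
# All names are hypothetical; this is not X's actual code.

from dataclasses import dataclass

@dataclass
class Upload:
    account_id: str
    media_id: str
    media_hash: str  # perceptual hash computed at upload time

def remove_media(media_id: str) -> None:
    print(f"deactivated media {media_id}")

def suspend_account(account_id: str) -> None:
    print(f"suspended account {account_id}")

def queue_cybertipline_report(upload: Upload) -> None:
    print(f"queued NCMEC CyberTipline report for {upload.media_id}")

def handle_upload(upload: Upload, known_csam_hashes: set[str]) -> None:
    """If the media hash matches a known-CSAM hash, action it automatically."""
    if upload.media_hash not in known_csam_hashes:
        return  # no match: this check takes no action
    remove_media(upload.media_id)
    suspend_account(upload.account_id)
    queue_cybertipline_report(upload)

handle_upload(Upload("acct-1", "media-9", "deadbeef"), {"deadbeef"})
```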

How to report policy violations: X provides a form for users to report “a child sexual exploitation issue.” Users can also report child safety concerns, including child sexual exploitation, grooming, physical abuse, and underage users, directly from a post, ad, list, or user profile. In the majority of cases, X says violations will result in immediate and permanent suspension, although there is an appeals form available.

Transparency: X’s ‘Transparency Reports’ center was first launched in 2012, covering a range of topics like ‘Information Requests,’ ‘Copyright Notices,’ ‘Removal Requests,’ and more. The last report in that format was published in 2022. The most recent report, and the first under Elon Musk’s ownership, was published to remain in compliance with the Digital Services Act; EU regulators have taken square aim at X over its unwillingness to cooperate with the law’s transparency mandates. X published the DSA compliance report last October, listing ‘child sexual exploitation’ (CSE) policy violations, among others, by detection method, type of enforcement, and country.

Documented issues and concerns

Monetizing adult content on the site. In the spring of 2022, running parallel to then-Twitter’s sale to Elon Musk, the social media company quietly considered giving users the ability to monetize adult content posted to its site. According to an investigation by The Verge, Twitter hoped to compete directly with OnlyFans, a hub for adult-content creators valued that year at $2.5 billion. While some worried about backlash from advertisers, what eventually killed the plan was far more concerning: Twitter could not accurately detect child sexual exploitation and non-consensual nudity at scale, according to its own Red Team. “If Twitter couldn’t consistently remove child sexual exploitative content on the platform today, how would it even begin to monetize porn?” asked the report’s authors.

Layoffs to Trust and Safety Team. In October 2022, Elon Musk officially acquired Twitter (since renamed X) for a whopping $44 billion after a months-long takeover involving a lawsuit, a whistleblower complaint, and a tumultuous transition. Within a month, and despite Musk stating that removing child exploitation on the platform was his number one priority, the team responsible for reporting CSAM was gutted, going from 20 team members down to fewer than ten, according to Bloomberg. Additional reporting from Wired further revealed that just one full-time employee remained on the team dedicated to removing CSAM across Japan and the Asia-Pacific region.

A former employee from Twitter’s Singapore office told Channel News Asia that the staff cuts were tied to the social media company’s shift to automate many of its child safety measures. This included automatically removing posts that are “flagged by ‘trusted reporters,’ or accounts with a track record of accurately flagging harmful posts.” The ex-employee told the news site the accuracy rate of these reports is “about 10 percent across all types of policy violations.” These automated processes also often lack the nuance needed to identify “benign uses” of language associated with child sexual exploitation, track down the creation of new CSAM, and identify the sexual solicitation of a minor.

In December 2022, Musk disbanded the company’s Trust and Safety Council, a group of volunteers who offered the company expert advice about online safety, including on child sexual exploitation.

Sale of CSAM on the platform. In January 2023, an NBC News investigation found that “at least dozens of accounts have continued to post hundreds of tweets in aggregate using terms, abbreviations and hashtags indicating the sale of what Twitter calls child sexual exploitation material.” Some tweets and accounts remained up for months, while other accounts “appeared to delete the tweets shortly after posting them, seemingly to avoid detection, and later posted similar offers from the same accounts.” The reporting led Sen. Richard Durbin (D-IL) to ask the Department of Justice to review Twitter’s handling of CSAM and consider whether the company’s conduct merited a further investigation.

The New York Times also published the results of an investigation assessing Twitter’s efforts to remove CSAM in February 2023. It found that child sex abuse imagery was widely circulating on Twitter, including a video, viewed more than 120,000 times, showing a boy being sexually assaulted. The Times report noted that CSAM was not only easy to find but was actively promoted by Twitter through its recommendation algorithm.

Researchers from the Stanford Internet Observatory (SIO) later found dozens of images on Twitter that had previously been flagged as CSAM through PhotoDNA, the hash-matching technology social media companies use to screen uploads against databases of known abuse imagery. Twitter failed to take action on these images upon upload, and “[i]n some cases, accounts that posted known CSAM images remained active until multiple infractions had occurred.” Although Twitter fixed the issue on May 20, 2023, the change came only after the researchers notified Twitter through a third-party intermediary, as they could not locate a contact on Twitter’s Trust and Safety team.

The SIO researchers also identified 128 accounts advertising the sale of self-generated CSAM (SG-CSAM) on Twitter, many of which appeared to be operated by minors. Most accounts were taken down within a week, but even after the researchers reported the accounts to government officials, 22 of the original 128 were still active a month later. Twitter’s recommendation system also surfaced two to three related accounts that may likewise be “sellers” of SG-CSAM. As the researchers noted, the fact that nudity is allowed on Twitter “makes it more likely that explicit and illegal material may be posted or distributed before the account is suspended.”

