Discord has introduced a new Warning System that shows users which rules they violated and how those violations affect their account, with the goal of helping them learn “to be better digital citizens.” The company is also launching new features to protect teenage users after years of controversy, including child predator cases.
The new Warning System has “multiple touchpoints” so Discord users can see which rules they violated and how the violations affect their account.
Discord said the touchpoints are more transparent than bans because users can understand how they broke the rules.
Rule-breakers will receive an in-app direct message from Discord notifying them of a warning or violation, along with details about the post that broke the rules.
Discord users will now fall into one of four account statuses before suspension: “All Good,” “Limited,” “Very Limited” and “At Risk.”
The platform will continue to have a zero-tolerance policy for “violent extremism and content that sexualizes children.”
Discord is rolling out its new Warning System worldwide next week. The platform will begin limiting certain features for users who break the rules, such as the ability to upload images for users who share an inappropriate photo. A spokesperson for Discord told Forbes that when the platform tells users how they have violated its policies and gives them a chance to correct their mistakes, it “can encourage better digital citizens.”

Users who receive a warning can review all of their past violations in a new “Account Standing” tab. Savannah Badalich, Discord’s senior director of policy, told The Verge that the platform will issue one-year temporary bans rather than permanent bans for most violations, unless they “are extremely harmful.”

Discord is also launching Teen Safety Assist, which includes filters and alerts to protect teens. Teen users will get a safety alert when they receive a direct message from another user for the first time, and media that Discord deems sensitive will be automatically blurred.
What To Watch For
Discord’s vice president of product, Ben Shanken, told Fast Company the platform wants to “leave more room for subjectivity” when deciding how to handle violations. “If your friend is just trying to report a message to troll you a little bit, we don’t want that to result in your account getting banned,” Shanken said. Two features of Discord’s new Teen Safety Assist initiative will also launch worldwide next week: its sensitive content filters will automatically blur sensitive media in direct messages, group messages and servers, and any user can opt into the filters.
“It allows us to deploy more technology to identify problematic content, but it’s in a way that doesn’t feel like it’s a violation of privacy,” John Redgrave, vice president of trust and safety at Discord, told The Verge about privacy concerns regarding the AI-operated content filters. “This is our way of saying we’re not going to dramatically invade everyone’s privacy, while also providing tools that enrich people’s experience from a safety perspective.”
Pentagon Tightens Classified Information Procedures In Response To Discord Leak (Forbes)
Discord Pushes Forward On Building A Place ‘Where No One Is An Outsider’ With New Accessibility Improvements For Disabled People (Forbes)