Twitter today introduced Safety Mode, a new feature that aims to block online harassment attempts and reduce disruptive interactions on the platform.
Once enabled on a Twitter account, Safety Mode is designed to automatically and temporarily block users for seven days when they use harmful language in replies, quote tweets, and mentions in your conversations.
Rolling out to a small group of beta testers
Twitter will initially test Safety Mode with a small group of users over the coming months, with plans to expand the pool of beta testers.
If you have been selected as one of the beta testers for this new feature, you can turn on Safety Mode right now from your "Privacy and safety" settings.
"Safety Mode is a feature that temporarily blocks accounts for seven days for using potentially harmful language — such as insults or hateful remarks — or sending repetitive and uninvited replies or mentions," said Jarrod Doherty, Product Lead at Twitter.
"When the feature is turned on in your Settings, our systems will assess the likelihood of a negative engagement by considering both the Tweet’s content and the relationship between the Tweet author and replier.
"Our technology takes existing relationships into account, so accounts you follow or frequently interact with will not be autoblocked."
Because accounts tagged as harmful by Safety Mode will be auto-blocked, they will temporarily be unable to follow the accounts they targeted, see their tweets, or send them direct messages.
"Introducing Safety Mode. A new way to limit unwelcome interactions on Twitter." — Twitter Safety (@TwitterSafety) September 1, 2021
Announced in February
Twitter first announced plans to introduce Safety Mode, a feature that automatically blocks abusive accounts, at its Analyst Day presentation on February 25, 2021.
As the company said at the time, Safety Mode "automatically blocks accounts that appear to break the Twitter Rules, and mute accounts that might be using insults, name-calling, strong language, or hateful remarks."
When Safety Mode detects unwanted attention on a user who has it enabled, it will send a notification saying that it "detected some abusive or spammy replies" to one of their tweets.
This new feature is part of a larger Twitter effort to block abuse on its platform, as shown in the company's Q3 2019 earnings letter when Twitter said that it "gave people more control over their conversations on Twitter with the launch of author-moderated replies."
The company added that it improved its ability "to proactively identify and remove abusive content, with more than 50% of the Tweets removed for abusive content in Q3 taken down without a bystander or first-person report."
One year ago, Twitter also introduced a reply-limiting feature that lets all users choose who can reply to their tweets, further curbing unwanted replies.