
Twitter Finally Introduced More Anti-Harassment Tools. Here’s How to Use Them.

Starting today, Twitter is rolling out an expanded mute feature to help users block harassment. While users have long been able to block individual accounts, the new option lets them filter phrases, keywords, and entire conversations out of their feeds, Twitter explained in a blog post today. (If this sounds familiar, Instagram introduced a similar safety measure earlier this year.)

To access the new mute feature, head to “Settings.” From there, look for “Muted words,” where you can add usernames, words, hashtags, and emoji you don’t want to see in your notifications. It’s convenient if you want to limit the amount of hate speech, slurs, and eggplant emoji you see on Twitter each day, but it’s not a perfect fix: muting a given term might keep it out of your notifications, but there’s no guarantee it won’t pop up in your timeline via other users.

To mute a given conversation, tap the options arrow on a tweet in that conversation (where you’ll also find options like “Mute user” and “Unfollow user”) and choose “Mute conversation.” This only works for conversations that include your full handle, so if people are talking about you without tagging you, you’ll have to mute your name as a keyword instead.

Today’s other Twitter announcements include a new “hateful conduct” category that users can select when reporting abuse, along with word that the company has doubled down on preparing its staff to handle abuse reports. “We’ve retrained all of our support teams on our policies, including special sessions on cultural and historical contextualization of hateful conduct, and implemented an ongoing refresher program,” Twitter explained.

While any effort toward making Twitter a safer platform and less of a hotbed for abuse is to be applauded, these features feel a bit like too little, too late, especially given that many heavy Twitter users access the platform via TweetDeck, a companion app that already offers muting for phrases and terms. (Emoji- and conversation-muting remain unique to Twitter.) And as anybody who has ever used TweetDeck will tell you, muting the problem doesn’t fix it; it just hides some of it from view. Giving users the option to label abuse as “hateful conduct” is welcome in that it’s more specific than previous options. But the real question is whether Twitter’s responses to those reports will change in the coming weeks, or whether we can all expect to keep receiving those “we reviewed the content and determined it was not in violation of the Twitter rules” emails, which feel entirely automated and rarely as if an actual human took a look at the problem.
