
Twitter Mutes Trolls

The Washington Post, November 15, 2016
Twitter introduces a mute button for trolls as it struggles to fight online abuse
Twitter took another step Tuesday in its long fight against trolls, announcing that it will give users a mute feature to weed out harassing words and phrases from their notifications.

The company plans to roll out the feature more broadly over time, Del Harvey, Twitter’s vice president of trust and safety, said in an interview with The Washington Post. Adding it to notifications was a priority, based on feedback from Twitter users.

“We’ve heard from users that this [notifications] is an area where people don’t feel as though they have as much control on Twitter,” Harvey said. “You’re not searching for this content, but it’s still something that’s coming in to your Twitter experience.”

With the new feature, Twitter users will be able to compile their own list of words, phrases and emoji that they don’t want to see pop up in their notifications from the network.
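In principle, this kind of keyword mute is simple substring filtering over incoming notifications. The sketch below is illustrative only; the function names and the case-insensitive substring matching are assumptions, not Twitter’s actual implementation:

```python
# Illustrative sketch (not Twitter's implementation) of muting notifications
# that contain any word, phrase, or emoji on a user's mute list.

def is_muted(notification_text: str, mute_list: list[str]) -> bool:
    """Return True if the notification contains any muted term (case-insensitive)."""
    text = notification_text.lower()
    return any(term.lower() in text for term in mute_list)

def filter_notifications(notifications: list[str], mute_list: list[str]) -> list[str]:
    """Keep only notifications that match no muted term."""
    return [n for n in notifications if not is_muted(n, mute_list)]
```

For example, with a mute list of `["troll", "🙄"]`, a notification containing either the word or the emoji would be dropped while others pass through unchanged.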


The feature is similar to one Instagram rolled out earlier this year that let users block comments containing certain phrases. Twitter's move comes after an election cycle that saw, among other incidents, a prominent anti-Semitic harassment campaign on the network. The company has been repeatedly criticized for not moving quickly enough to combat harassment on its site.

Harvey, who has been at Twitter for eight years, said she knows that some users are frustrated with the pace at which it’s addressing harassment issues. “We haven’t always moved as quickly as we would like or done as much as we would like,” she said, adding that the company is trying to make sure that it has tools to shut down harassment on the site without crossing the line into limiting speech. “We have tried to be thoughtful, to make sure we don’t have unintended and negative consequences,” Harvey said.

Twitter has struggled for years with striking the right balance between protecting open expression on the network and protecting victims of harassment, often fielding heavy criticism for erring too far on the side of free speech. Harvey acknowledged that there were plenty of prime examples of Twitter’s shortcomings when it came to policing harassment on the network during this year’s election.


The addition of the mute feature wasn’t driven by the dialogue around the election itself, Harvey said, but it did underscore for Twitter how much further it still has to go. Harvey said that Twitter has also made some changes to the way that users can report harassment on the site to better reflect its policies.

Last year, Twitter explicitly banned “hateful conduct” — which prohibits the promotion of violence and direct attacks or threats to others on the basis of race, ethnicity and a number of other attributes. The social network has now updated the language it uses in its harassment reporting tool to reflect that policy and inform users that they can report others for that kind of behavior. Twitter has also made it easier for bystanders to report abuse — so person B can report that person C is harassing person A.

Finally, Harvey said, Twitter is continually training its staff across the globe to recognize more forms of abuse. Twitter staff members review each report of abuse to determine whether it violates the company’s policies. It’s common to see Twitter users posting incredulous screenshots of notices from Twitter’s abuse team rejecting abuse reports, even when the reported content clearly violates Twitter’s rules. (Yes, Harvey sees them, too.)


Those oversights are upsetting, she acknowledged, and often happen because a reviewer doesn’t have the cultural context to understand why something may be offensive or abusive. Something that’s obviously offensive to an American may not be so obvious to a person born in India, and vice versa, she explained.

So, in addition to an ongoing training course on how to recognize abusive behavior, Harvey said, Twitter has put together a training program on the historical and cultural context around particular types of harassment — types of anti-Semitism, for example. She said Twitter will also focus more closely on keeping its staff up to date with the evolving language of hate on its network.

Harvey made clear that she knows Twitter could still be doing more to protect its users and said that she hopes the company will be able to update its tools and policies on harassment more frequently than it has in the past.

“I’m definitely not saying that we’re never going to get it wrong again, or that everything is fixed,” she said. “We will still get it wrong. But we’ll take those instances and use them to real-time course-correct.”

Hayley Tsukayama covers consumer technology for The Washington Post.