"Language that makes someone less than human can have repercussions off the service, including normalizing serious violence," said Del Harvey, Twitter's VP of trust and safety. And she said that though the hateful conduct policy already in place catches some dehumanizing language, not all of it is in violation of that policy. "But there are still tweets many people consider to be abusive, even when they do not break our rules."
Twitter recently came under fire for hesitating to take any action against InfoWars and Alex Jones even as many other platforms were removing their content. At one point, the company said that neither was violating its rules and would therefore continue to be allowed on the site. While Twitter did ultimately ban them, it had earlier acknowledged that they broke its rules and chose to keep their accounts intact anyway. That reluctance to enforce its own rules puts the effectiveness of Twitter's regulations, including this new one, into question.
Twitter defines dehumanization as language that "treats others as less than human," like when a person is compared to an animal or a virus. And identifiable groups include those that are distinguished by shared characteristics such as race, national origin, sexual orientation, gender, political beliefs and social practices. The company is giving users until October 9th to fill out a survey about the proposed policy, which can be found here.
The survey asks how you would rate the clarity of the policy, how it could be improved, and for "examples of speech that contributes to a healthy conversation, but may violate this policy." Once the survey closes, the proposal will go through the company's regular policy development process.
Twitter says this is part of its effort to promote healthy conversations on its platform. Earlier this year, it asked outside experts for help in that regard.