Twitter has begun to acknowledge that it has persistent problems with malicious automation and spam accounts, and it has been working on ways to fix these issues. Today in a blog post, the company shared some of its efforts as well as the progress it has made in some areas. First, it said that its machine learning tools have allowed it to spot more automated accounts without the need to rely on reports from others. Last month, Twitter said its systems "identified and challenged more than 9.9 million potentially spammy or automated accounts per week," a pace up from 6.4 million in December and 3.2 million last September. At the same time, reports of spam dropped from around 25,000 per day in March to approximately 17,000 per day in May.
Going forward, Twitter will start limiting the reach of accounts it has labeled as possibly spammy. Once its systems have detected such an account, Twitter will remove it from follower figures and engagement counts, put a warning on it and keep new accounts from following it until it has passed some sort of verification, such as providing a phone number. "We think this is an important shift in how we display tweet and account information to ensure that malicious actors aren't able to artificially boost an account's credibility permanently by inflating metrics like the number of followers," said Twitter.
Twitter is also going to make it more difficult for spam accounts to be created in the first place. New accounts will soon have to verify either an email address or a phone number when they're being created. "We will be working closely with our Trust & Safety Council and other expert NGOs to ensure this change does not hurt someone in a high-risk environment where anonymity is important," the company said, adding that this feature will roll out later this year. Existing accounts aren't off the hook, though: Twitter is also conducting an audit and challenging any accounts it suspects might be spam.
Additionally, when accounts display activity that might indicate malicious behavior, such as high-volume tweeting with the same hashtag or repeatedly tweeting at the same handle without receiving a reply, Twitter will subject those accounts to tests. These could be something as simple as a reCAPTCHA completion or a password reset request, or the accounts could be sent to Twitter staff for review.
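To make the idea concrete, here is a minimal sketch of what flagging accounts on those two signals could look like. This is purely illustrative: Twitter has not published its actual criteria or thresholds, so the function name, the tweet fields, and the limits below are all assumptions.

```python
from collections import Counter

# Hypothetical thresholds -- Twitter's real limits are not public.
HASHTAG_LIMIT = 50      # tweets with the same hashtag before an account is flagged
UNANSWERED_LIMIT = 30   # unreplied tweets at the same handle before flagging

def flag_suspicious(tweets):
    """Flag accounts whose patterns resemble the behaviors described above:
    high-volume tweeting with one hashtag, or repeated unanswered tweets
    at one handle. `tweets` is a list of dicts with the keys
    "author", "hashtag", "mention", and "got_reply" (all hypothetical names).
    Returns the set of authors to challenge (e.g. with a reCAPTCHA)."""
    hashtag_counts = Counter()   # (author, hashtag) -> tweet count
    unanswered = Counter()       # (author, mention) -> unreplied tweet count
    for t in tweets:
        if t.get("hashtag"):
            hashtag_counts[(t["author"], t["hashtag"])] += 1
        if t.get("mention") and not t.get("got_reply"):
            unanswered[(t["author"], t["mention"])] += 1
    flagged = {a for (a, _), n in hashtag_counts.items() if n > HASHTAG_LIMIT}
    flagged |= {a for (a, _), n in unanswered.items() if n > UNANSWERED_LIMIT}
    return flagged
```

A real system would of course weigh many more signals and route borderline cases to human review, as the article notes.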
In recent months, Twitter has asked experts for ideas on how to improve the health of conversations on its platform, announced a study exploring whether publicizing its rules might affect the civility of conversations, and begun hiding tweets it believes come from trolls. This month, it also bought a company that specializes in spam and abuse issues.
"But we know there's still a lot of work to be done," Twitter said in its blog post. "Inauthentic accounts, spam and malicious automation disrupt everyone's experience on Twitter, and we will never be done with our efforts to identify and prevent attempts to manipulate conversations on our platform."