A little more than a week after the election, Twitter is giving some additional insight into the effectiveness of its efforts to curb the spread of election misinformation.
Between October 27 and November 11, the company labeled about 300,000 tweets for “disputed and potentially misleading” content. That accounts for about 0.2 percent of all election-related posts during that time, according to the company. Of the 300,000 labeled tweets, a much smaller subset — 456 — received more aggressive labels that included a warning users had to click through before they could view the tweet in question. Those tweets were also blocked from being retweeted or liked.
The company didn’t indicate how many of those tweets came from Donald Trump, though several dozen of his tweets have had labels applied since he began tweeting on election night. Twitter did note that most of its labels were applied quickly and that “74% of the people who viewed those Tweets saw them after we applied a label or warning message.”
Overall, the labels led to a 29 percent decrease in quote tweets, which Twitter says could be a sign that its efforts to reduce the spread of misinformation were successful. However, the company also acknowledged that other steps may not have had the intended effect. For example, a change that removed algorithmic recommendations from users’ timelines didn’t seem to have an effect on misinformation, and Twitter said it will reinstate the feature.