Twitter says its algorithms amplify the ‘political right’ but it doesn’t know why

The company says further research is needed.

Karissa Bell
October 21st, 2021
Twitter said in April that it was undertaking a new effort to study algorithmic fairness on its platform and whether its algorithms contribute to “unintentional harms.” As part of that work, the company promised to study the political leanings of its content recommendations. Now, the company has published its initial findings. According to Twitter’s research team, the company’s timeline algorithm amplifies content from the “political right” in six of the seven countries it studied.

The research looked at two issues: whether the algorithmic timeline amplified political content from elected officials, and whether some political groups received a greater amount of amplification. The researchers used tweets from news outlets and elected officials in seven countries (Canada, France, Germany, Japan, Spain, the United Kingdom, and the United States) to conduct the analysis, which they said was the first of its kind for Twitter.

“Tweets about political content from elected officials, regardless of party or whether the party is in power, do see algorithmic amplification when compared to political content on the reverse chronological timeline,” Twitter’s Rumman Chowdhury wrote about the research. “In 6 out of 7 countries, Tweets posted by political right elected officials are algorithmically amplified more than the political left. Right-leaning news outlets (defined by 3rd parties), see greater amplification compared to left-leaning.”

Crucially, as Chowdhury points out to Protocol, it’s not yet clear why this is happening. In the paper, the researchers posit that the difference in amplification could be a result of political parties pursuing “different strategies on Twitter.” But the team said that more research would be needed to fully understand the cause.

While the findings are likely to raise some eyebrows, Chowdhury also notes that “algorithmic amplification is not problematic by default.” The researchers further point out that their analysis “does not support the hypothesis that algorithmic personalization amplifies extreme ideologies more than mainstream political voices.”

But at the very least, the research would seem to further debunk the notion that Twitter is biased against conservatives. The research also offers an intriguing look at how a tech platform can study the unintentional effects of its algorithms. Facebook, which has come under pressure to make more of its own research public, has defended its algorithms even as a whistleblower has suggested the company should move back to a chronological timeline.

The study is part of a broader effort by Twitter to uncover bias and other issues in its algorithms. The company has also published research about its image cropping algorithm and started a bug bounty program to find bias in its platform.
