Twitter will study ‘unintentional harms’ caused by its algorithms

The company will study its content recommendations and image cropping as part of the effort.


Twitter announced a new plan to study the fairness of its algorithms. As part of the effort, which the company has dubbed the “Responsible Machine Learning Initiative,” data scientists and engineers from across the company will study potential “unintentional harms” caused by its algorithms and make the findings public.

“We’re conducting in-depth analysis and studies to assess the existence of potential harms in the algorithms we use,” the company wrote in a blog post announcing the initiative.

To start, the company will study Twitter’s image cropping algorithm, which has been criticized as being biased toward people with lighter skin. Twitter will also study its content recommendations, including “a fairness assessment of our Home timeline recommendations across racial subgroups” and “an analysis of content recommendations for different political ideologies across seven countries.”

It’s not clear how much of an impact this initiative will have. Twitter notes that in some cases it may change aspects of its platform based on its findings, while other studies may simply result in “important discussions around the way we build and apply ML [machine learning].” But the issue is a timely one for Twitter and other social media platforms. Lawmakers have pressed Twitter, YouTube and Facebook for more transparency about their algorithms in the wake of the insurrection at the U.S. Capitol, and some lawmakers have proposed legislation that would require companies to evaluate their algorithms for bias.

Twitter CEO Jack Dorsey has also spoken of his desire to create a marketplace for algorithms, which would allow users to control which algorithms they use. In its latest blog post, the company says it’s in the “early stages of exploring” such an idea.