Engadget

Twitter's AI bounty program reveals bias toward young, pretty white people

There are biases against the elderly and Arabic writing, too.

Image credit: Rafael Henrique/SOPA Images/LightRocket via Getty Images

Twitter's first bounty program for AI bias has wrapped up, and there are already some glaring issues the company wants to address. CNET reports that grad student Bogdan Kulynych discovered that photo beauty filters skew the scoring system of Twitter's saliency (importance) algorithm in favor of slimmer, younger and lighter-skinned (or warmer-toned) people. The findings show that algorithms can "amplify real-world biases" and conventional beauty expectations, Twitter said.

This wasn't the only issue. Halt AI learned that Twitter's saliency algorithm "perpetuated marginalization" by cropping out the elderly and people with disabilities. Researcher Roya Pakzad, meanwhile, found that the saliency algorithm favors Latin script over Arabic when cropping images. Another researcher spotted a bias toward light-skinned emojis, while an anonymous contributor found that almost-invisible pixels could manipulate the algorithm's preferences.

Twitter has published the code for the winning entries.

The company didn't say how soon it might address the algorithmic bias. However, this comes amid a mounting backlash against beauty filters over their tendency to create or reinforce unrealistic standards. Google, for instance, turned off automatic selfie retouching on Pixel phones and stopped referring to these processes as beauty filters. It wouldn't be surprising if Twitter's algorithm took a more neutral stance on content in the near future.