Microsoft shows what it learned from its Tay AI's racist tirade

It's both an explanation and a mea culpa.

If it wasn't already clear that Microsoft learned a few hard lessons after its Tay AI went off the deep end with racist and sexist remarks, it is now. The folks in Redmond have posted reflections on the incident that shed a little more light on both what happened and what the company learned. Believe it or not, Microsoft did stress-test its youth-oriented chatbot to make sure users had a "positive experience." However, it also admits that it wasn't prepared for what would happen when it exposed Tay to a wider audience. The company says it made a "critical oversight": it failed to account for a dedicated group exploiting a vulnerability in Tay's behavior that let them make her repeat all kinds of vile statements.

As for what happens next? Microsoft is focused on fixing the immediate problem, of course, but it stresses that it'll need to "iterate" by testing with large groups, sometimes in public. That's partly an excuse for Tay's recent behavior (surely Microsoft could have anticipated that a bot willing to repeat what it hears invites abuse), but the company has a point: machine learning software can only succeed if it has enough data to learn from. Tay will only get better if she's subjected to the abuses of the internet, however embarrassing those may be to her creators.