
AI transforms 'The Great British Bakeoff' into a horror show

Artificial intelligence (AI) can do astonishing things when given specific jobs, but it's terrible at understanding context -- something we've seen before in this series. Thanks to a new experiment inspired by The Great British Bakeoff (GBBO), we can again witness the tragedy of AI stepping outside its lane. Researcher Janelle Shane trained NVIDIA's StyleGAN 2 system on images of the show's bakers, pastries and tents, along with "random squirrels," and the results were decidedly not charming and sweet.

Rather, they're a mashup of bodies, bread and faces twisted grotesquely together and set in a nightmare tent. How did a system so great at generating realistic fake faces go so spectacularly wrong in this scenario? According to Shane's article, it's a vivid (and hilarious) demonstration of what you can and can't do with current deep learning technology.

It wasn't a lack of data: Shane trained the system on 55,000 images from the GBBO. The problems started because those images looked nothing like the faces the system usually works with. Rather than being cropped and centered like the faces in StyleGAN 2's usual training sets, the show's faces appeared at random sizes and positions in the frames. The system is also built to generate one kind of object at a time (faces, for instance), not bakers, bread and tents jumbled together in a single scene.
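To make that concrete, face-generation models are typically fed images that have been run through a detect-and-crop step so every face lands in roughly the same spot. Below is a minimal sketch of that kind of preprocessing using the facenet-pytorch package -- an illustration of the idea, not Shane's actual pipeline, and "frame.jpg" is just a stand-in for a still from the show:

```python
from PIL import Image
from facenet_pytorch import MTCNN

# Detect a face and save a consistently sized, centered crop.
# image_size and margin here are placeholder choices, not StyleGAN 2's exact recipe.
mtcnn = MTCNN(image_size=256, margin=40)

frame = Image.open("frame.jpg")                      # stand-in for a TV-show still
face = mtcnn(frame, save_path="face_crop.jpg")       # returns None if no face is found
if face is None:
    print("No face detected -- frame skipped")
```

A baking-show frame often fails exactly this step: the face is tiny, off to one side, or half-hidden behind a cake stand, so the crops never end up as uniform as the portraits the model is used to.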

So, rather than creating new faces, the system first erased them completely, leaving Eyes Without a Face-looking people caught in a baking hell. Further training didn't help much, either. "This is the usual outcome when you train a neural network for a long time -- not an acceleration of progress but a gradual stagnation," Shane wrote. "The baking show images were too varied for the neural net, and that's why its progress stopped, even with lots of training data."

[Image: AI Weirdness / Great British Bakeoff generated samples]

What's more, neural nets are great at patterns, so the system filled in gaps by repeating elements borrowed from other images, as shown above. "Even where the neural net ill-advisedly decides to fill the entire tent interior with bread (or possibly with fingers; it's sometimes unsettlingly hard to tell), you can see that the patterns in the bread repeat," Shane said.

The same goes for the top image, where repeated patterns crop up everywhere. "Human faces and bodies, on the other hand, aren't made of repeating patterns, no matter how much the neural net may want them that way," wrote Shane. The system also mashed together repeating textures to create baked goods nobody would want to eat. "Would you like voidcake, floating dough, or terror blueberry?" she asked.
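You can see why repetition works for baked goods but not for people with a toy experiment: tile a small crop of bread across a canvas and it still reads as bread; tile a crop of a face and the repetition is obvious at a glance. A minimal sketch, with placeholder file names:

```python
import numpy as np
from PIL import Image

# Toy illustration of texture repetition: repeat one small crop across a canvas,
# roughly the way a texture-happy generator fills space with a repeating pattern.
patch = np.asarray(Image.open("bread_patch.png").convert("RGB"))  # any small crop
canvas = np.tile(patch, (4, 6, 1))                                # 4x6 grid of the patch
Image.fromarray(canvas).save("tiled.png")
# Swap in a crop of a face and the trick falls apart -- faces aren't made of
# repeating patterns, which is part of why the generated bakers look so wrong.
```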

We've seen these themes before in other scenarios like self-driving or debating, where AI can grind out certain tasks but fail at things humans do with ease. "It's a really vivid illustration of how much today's AI struggles when a problem is too broad," Shane told Engadget. "So many of the AI mistakes in my blog and my book turn out to be because the AI was asked to do too much."

As she notes, you can try it yourself using cat pictures and AI training software like Runway ML -- as long as you're prepared to transform Ms. Mittens into something out of Pet Sematary.
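If you'd rather skip the point-and-click tools, the same experiment fits in a short PyTorch script. The sketch below trains a tiny DCGAN-style generator -- not StyleGAN 2, and far cruder -- on a folder of your own photos; DATA_DIR, the batch size and the epoch count are placeholder values you'd tune for real use:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Placeholder settings; ImageFolder expects one subdirectory per class, e.g. cats/all/*.jpg
DATA_DIR, Z_DIM, BATCH, EPOCHS = "cats", 100, 64, 5

tf = transforms.Compose([
    transforms.Resize(64), transforms.CenterCrop(64),
    transforms.ToTensor(), transforms.Normalize([0.5] * 3, [0.5] * 3),
])
loader = DataLoader(datasets.ImageFolder(DATA_DIR, tf),
                    batch_size=BATCH, shuffle=True, drop_last=True)

def G():  # generator: noise vector -> 64x64 RGB image
    return nn.Sequential(
        nn.ConvTranspose2d(Z_DIM, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
        nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
        nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
        nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
        nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())

def D():  # discriminator: 64x64 image -> real/fake logit
    return nn.Sequential(
        nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
        nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
        nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
        nn.Conv2d(256, 1, 8, 1, 0))

g, d = G(), D()
opt_g = torch.optim.Adam(g.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(d.parameters(), lr=2e-4, betas=(0.5, 0.999))
loss = nn.BCEWithLogitsLoss()

for epoch in range(EPOCHS):
    for real, _ in loader:
        z = torch.randn(BATCH, Z_DIM, 1, 1)
        fake = g(z)
        # Discriminator: real images should score 1, generated ones 0.
        d_loss = loss(d(real).view(-1), torch.ones(BATCH)) + \
                 loss(d(fake.detach()).view(-1), torch.zeros(BATCH))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator: try to make the discriminator call its images real.
        g_loss = loss(d(fake).view(-1), torch.ones(BATCH))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    print(f"epoch {epoch}: d={d_loss.item():.3f} g={g_loss.item():.3f}")
```

Even a toy GAN like this tends to show the pattern the article describes: a pile of tightly cropped, consistent photos slowly turns into something recognizable, while varied, cluttered scenes mostly turn into soup.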
