Image credit: Georgia Tech

AIs get a crash course in humanity by interpreting stories

Teaching machines ethical behavior, one crowdsourced story at a time.
As it turns out, the key to crafting intelligent machines that won't go rogue and slaughter us all might be some very thoughtful storytelling. Mark Riedl and Brent Harrison from Georgia Tech are trying to mold the way artificial intelligences wrap their incorporeal heads around human ethics by feeding them stories, and rewarding them for sticking to an ethically sound path.

The project, dubbed Quixote, is a sequel and partner of sorts to Scheherazade, an earlier project of Riedl's in which a program pieced together stories with logically sound plot points and developments from crowdsourced submissions. This time, Riedl and Harrison used Scheherazade to map out the structure of a story's plot elements and figure out the "most reliable" path through them. From there, Quixote turns that "plot graph" into a tree of nodes (in this case, plot points) connected by transition events, and either rewards or punishes the artificial agent based on how well it sticks to that pattern of events.
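The reward-and-punish idea can be sketched in a few lines of Python. Everything below is an invented illustration, not the researchers' code: the plot graph, its plot points, and the scoring function are hypothetical stand-ins for the kind of structure described above, where an agent earns reward for transitions that follow the graph and a penalty for transitions that don't.

```python
# Illustrative sketch only (assumed structure, not Quixote's actual code):
# a plot graph mapping each plot point to the set of acceptable next points.
# The "pick up medicine" scenario here is a made-up example.
PLOT_GRAPH = {
    "enter_pharmacy": {"wait_in_line"},
    "wait_in_line": {"pay_for_medicine"},
    "pay_for_medicine": {"leave_pharmacy"},
    "leave_pharmacy": set(),
}

def score_trajectory(actions, graph, reward=1.0, penalty=-1.0):
    """Reward each transition that follows the plot graph; punish deviations."""
    total = 0.0
    for current, nxt in zip(actions, actions[1:]):
        total += reward if nxt in graph.get(current, set()) else penalty
    return total

# A socially acceptable sequence earns a positive score...
good = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave_pharmacy"]
# ...while grabbing the goods and bolting gets penalized.
bad = ["enter_pharmacy", "leave_pharmacy"]

print(score_trajectory(good, PLOT_GRAPH))  # 3.0
print(score_trajectory(bad, PLOT_GRAPH))   # -1.0
```

In a reinforcement-learning setup, a signal like this would be folded into the agent's reward function, nudging it toward action sequences that resemble the crowdsourced stories.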

It's a fascinating turn, but maybe not the most surprising one: human kids can pick up tips about creative problem solving from Rapunzel, and the Ant and the Grasshopper reinforces the importance of not being a procrastinating schmuck. (Of course, there are some classic stories with less-than-sterling lessons, too.) Riedl and Harrison's work might not be applicable to every robot we'll ever build, but hey, they admit it's nicely suited to so-called artificial agents that "have a limited range of purposes but need to interact with humans to achieve their goals." By steeping AIs in stories that align with certain cultural values, they just might learn to tell right from wrong (and without murderous consequences, to boot).
