MIT researchers teach autonomous cars how to deal with selfish drivers

New research helps autonomous systems classify human drivers as selfish or selfless.

Self-driving cars are already making their way onto the roads, but there are challenges in having computers share space with human drivers. AIs tend to assume that all humans act the same and behave in predictable and rational ways -- but anyone who's driven in busy traffic knows that's not the case.

New research from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) examines how a self-driving car can predict the behavior of other drivers on the road. That kind of prediction requires a degree of social awareness that is difficult for machines, so the researchers borrowed a tool from social psychology called Social Value Orientation (SVO), which measures how much weight a person gives to their own interests versus other people's, to help the system classify driving behavior as either selfish or selfless.
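To make the idea concrete, here is a minimal sketch of the standard SVO weighting in Python. The function names, the variable names, and the classification threshold are all hypothetical choices for this illustration, not code or values from the MIT paper: an SVO angle blends a driver's own reward with another driver's, with 0 meaning purely selfish and larger angles meaning more prosocial.

```python
import math

def svo_utility(reward_self: float, reward_other: float, svo_angle: float) -> float:
    """Blend an agent's own reward with another agent's reward.

    svo_angle is in radians: 0 is purely egoistic (only one's own
    reward counts), pi/4 weights both rewards equally. This cosine/sine
    weighting is the standard Social Value Orientation formulation;
    the function and variable names here are illustrative.
    """
    return math.cos(svo_angle) * reward_self + math.sin(svo_angle) * reward_other

def classify_driver(estimated_svo_angle: float) -> str:
    """Label a driver from an estimated SVO angle.

    The pi/8 cutoff is a made-up threshold for this sketch,
    not a value from the paper.
    """
    return "selfless" if estimated_svo_angle > math.pi / 8 else "selfish"

# Example: a driver whose observed trajectory best matches an SVO
# angle of ~0.6 rad would be treated as cooperative.
print(classify_driver(0.6))   # -> "selfless"
print(classify_driver(0.1))   # -> "selfish"
```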

After observing human driving behavior, the system was better able to predict the movements of other cars during lane merges and unprotected left turns, estimating their trajectories with 25 percent greater accuracy than before.
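How might observed behavior yield such a classification? Purely as an illustrative guess at the shape of the problem (the paper's actual estimation procedure is far more sophisticated, and the yielding model and grid search below are assumptions of this sketch), one approach is to pick the candidate SVO angle whose predicted behavior best matches what a driver actually did:

```python
import math

# Hypothetical model for this sketch: the fraction of a gap a driver
# yields to a merging car grows with the weight placed on others' reward.
def predicted_yield(svo_angle: float) -> float:
    return math.sin(svo_angle)  # 0 when fully selfish, ~0.71 at pi/4

def infer_svo(observed_yield: float,
              candidates=(0.0, math.pi / 8, math.pi / 4, 3 * math.pi / 8)) -> float:
    """Pick the candidate SVO angle whose predicted yielding best matches
    the observed yielding (least-squares over a small grid of angles)."""
    return min(candidates, key=lambda a: (predicted_yield(a) - observed_yield) ** 2)

# A driver who conceded ~70% of the available gap looks prosocial...
print(infer_svo(0.7))   # -> pi/4
# ...while one who conceded almost nothing looks egoistic.
print(infer_svo(0.05))  # -> 0.0
```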

This kind of insight into human behavior is important for safety when autonomous and human drivers share the road. The Uber self-driving car that struck and killed a pedestrian last year, for example, didn't have the ability to recognize jaywalkers.

"Working with and around humans means figuring out their intentions to better understand their behavior," said graduate student Wilko Schwarting, lead author on the new paper. "People's tendencies to be collaborative or competitive often spills over into how they behave as drivers. In this paper we sought to understand if this was something we could actually quantify."

The research needs to be expanded before it can be deployed on real roads. The next step is for the team to apply its model to other road users, such as pedestrians, cyclists, and other robotic systems.