MIT's 'Galileo' matches humans at predicting how things move

The system can look at a scene and predict how objects will collide and slide over each other.

The human brain is able to quickly predict how objects will react in any given scene. When you drop a ball, for instance, you have some idea of how high it'll bounce based on its materials, size and the surface it's interacting with. Scientists are now trying to replicate this "intuitive physics engine" with technology and, in basic scenarios, are finding some success. Researchers at MIT's Computer Science and Artificial Intelligence Lab (CSAIL) have developed "Galileo," which uses a combination of videos, 3D physics modelling and deep-learning algorithms to predict simple experiments "with an accuracy comparable to human subjects."
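The bounce intuition described above can be written down as a one-line physical rule: the rebound height scales with the square of the coefficient of restitution, h' = e²h. The sketch below is a toy illustration of that rule, not anything from the paper; the material values are rough assumptions.

```python
# Toy illustration of the "intuitive physics" prediction described above.
# The restitution values per material are rough assumptions, not measured
# data from the MIT study.

RESTITUTION = {
    "rubber": 0.8,  # bouncy
    "wood": 0.5,
    "foam": 0.3,    # absorbs most of the impact
}

def bounce_height(drop_height_m: float, material: str) -> float:
    """Height after one bounce: h' = e^2 * h, where e is the
    coefficient of restitution (ratio of rebound to impact speed)."""
    e = RESTITUTION[material]
    return e * e * drop_height_m

# A rubber ball dropped from 1 m rebounds to about 0.64 m;
# a foam ball from the same height to about 0.09 m.
print(round(bounce_height(1.0, "rubber"), 2))
print(round(bounce_height(1.0, "foam"), 2))
```

Capturing that dependence on material properties is exactly the kind of judgment Galileo has to learn from its video database.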

For starters, the team educated its system with 150 videos depicting numerous objects made from cardboard, foam and other materials. These gave Galileo a small database of doodads and their physical properties, allowing it to make some rudimentary hypotheses. Next came model information from Bullet, a physics engine used in video games such as Red Dead Redemption, which simulated each collision, producing velocity profiles and object positions that acted as "a reality check" for Galileo. The deep-learning algorithms then helped the system to refine its guesses to the point where, looking at the first frame of a video, it could recognise all of the objects and determine how they would behave.
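The "reality check" loop described above can be sketched in miniature: simulate an outcome under candidate physical parameters, then keep the parameters whose simulation best matches what was observed. This pure-Python toy (made-up numbers, a simple friction model instead of Bullet, grid search instead of deep learning) only illustrates that loop, not the actual system.

```python
# Minimal simulate-and-compare sketch. The real system uses the Bullet
# physics engine and deep learning; this toy version, with illustrative
# numbers, only demonstrates the idea of simulation as a reality check.

G = 9.81  # gravitational acceleration, m/s^2

def sliding_distance(v0: float, mu: float) -> float:
    """Distance an object slides before friction stops it:
    d = v0^2 / (2 * mu * g)."""
    return v0 ** 2 / (2 * mu * G)

def estimate_friction(v0: float, observed_distance: float) -> float:
    """Grid-search the friction coefficient whose simulated outcome
    best matches the observed one -- the 'reality check' step."""
    candidates = [0.05 * i for i in range(1, 20)]  # mu from 0.05 to 0.95
    return min(candidates,
               key=lambda mu: abs(sliding_distance(v0, mu) - observed_distance))

# An object launched at 2 m/s that stops after about 0.5 m
# implies a friction coefficient of roughly 0.4.
print(estimate_friction(2.0, 0.5))
```

In the actual pipeline, the deep-learning component replaces this brute-force search, letting the system guess the physical properties directly from the first frame.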

To assess its performance, the team created some visual challenges which were also performed by human test subjects. "The scenario(s) seem simple, but there are many different physical forces that make it difficult for a computer model to predict, from the objects' relative mass and elasticity to gravity and the friction between surface and object," said Ilker Yildirim, a lead author of the team's resulting research paper. "Where humans learn to make such judgments intuitively, we essentially had to teach the system each of these properties and how they impact each other collectively."
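To see why relative mass and elasticity interact, consider the standard textbook formula for a one-dimensional collision, where the coefficient of restitution e couples with the mass ratio to determine both outgoing speeds. This is general physics with illustrative numbers, not anything taken from the paper.

```python
# Textbook 1D collision: conservation of momentum plus the restitution
# relation v2' - v1' = -e * (v2 - v1). Numbers are illustrative only.

def collide_1d(m1: float, v1: float, m2: float, v2: float, e: float):
    """Post-collision velocities of two bodies in one dimension."""
    p = m1 * v1 + m2 * v2  # total momentum, conserved through the collision
    v1p = (p + m2 * e * (v2 - v1)) / (m1 + m2)
    v2p = (p + m1 * e * (v1 - v2)) / (m1 + m2)
    return v1p, v2p

# A 1 kg ball at 3 m/s striking a stationary 2 kg ball with e = 0.5:
# the light ball stops dead and the heavy one moves off at 1.5 m/s.
v1p, v2p = collide_1d(1.0, 3.0, 2.0, 0.0, 0.5)
print(v1p, v2p)
```

Change either the mass ratio or e and both outgoing speeds shift together, which is exactly the entanglement of properties Yildirim describes having to teach the system.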

Surprisingly, the computer and its human challengers performed similarly, with both occasionally overestimating how far an object would move. That's a success and testament to the team's underlying methodology, which sought to make an "intuitive physics engine" similar to our own -- mistakes and all. Maybe one day, a robot will be able to walk into a room and produce a Rube Goldberg machine like this one.
