Stanford built a '4D' camera for cars, robots and VR

It can also capture 140 degrees of info in one image.

A team of Stanford scientists has created what could be the best "eye" yet for autonomous vehicles and delivery drones. It's a 4D camera that can capture nearly 140 degrees of information, gathering more in a single image than conventional cameras can. The researchers call their design the "first-ever single-lens, wide field of view, light field camera." It relies on light field photography for the additional info that makes its results four-dimensional: the camera observes and records the direction and distance of the light hitting the lens and bundles that with the resulting 2D image.

As a result, the team's robot eye can refocus images after they're taken, which is light field photography's most popular feature. Remember Lytro? Its small camera could adjust the focus of an image after the fact, because it also used light field imaging tech. The researchers compare the difference between looking through a normal camera and the one they designed to the difference between looking through a peephole and a window:

"A 2D photo is like a peephole because you can't move your head around to gain more information about depth, translucency or light scattering. Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess."
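The refocusing trick described above is commonly done with a "shift-and-sum" pass over the light field's sub-aperture views. As a rough illustration only, here is a minimal sketch in Python, assuming a toy light field stored as a 4D numpy array (a U x V grid of small views); this is not the Stanford camera's actual data format or pipeline:

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum synthetic refocusing (illustrative sketch).

    light_field: array of shape (U, V, H, W) -- an assumed U x V grid of
    H x W sub-aperture views, not the actual Stanford camera format.
    alpha: refocus parameter; 0 leaves every view unshifted.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the grid
            # centre, then accumulate; the chosen alpha picks the plane
            # that ends up in focus.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Toy light field: a 3x3 grid of 8x8 views.
lf = np.random.rand(3, 3, 8, 8)
image = refocus(lf, alpha=1.0)
```

With `alpha=0` the function simply averages all the views; nonzero values shift the views before summing, which brings a different depth plane into focus.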

Assistant professor Gordon Wetzstein and postdoctoral scholar Donald Dansereau with a prototype of the monocentric camera that captured the first single-lens panoramic light fields.

In the future, various types of robots and machines could take advantage of the camera's capabilities. A rugged robot could use the camera's light field features to refocus images as it moves through rain. The camera could also sharpen close-up images for search-and-rescue robots or self-driving cars navigating tight spaces, and it could capture images for augmented and virtual reality, since all the info it packs into one picture could lead to more seamless renderings.

At the moment, the device is still in its proof-of-concept stage and is a bit too big for actual use. The researchers are aiming to develop a smaller and lighter version that they can test on a robot, but for now, you can see some of its sample snapshots in the video below: