
NVIDIA's NeRF AI instantly turns 2D photos into 3D objects

Instant NeRF only needs seconds to train and to produce results.


A technique called a Neural Radiance Field, or NeRF, trains an AI model to reconstruct 3D objects from two-dimensional photos. NeRF can fill in the blanks, so to speak, by interpolating the parts of a scene the 2D photos didn't capture. It's a neat trick that could lead to advances in fields such as video games and autonomous driving. Now, NVIDIA has developed a new NeRF technique that the company claims is the fastest to date: it needs only seconds to train and can then generate a 3D scene almost instantly.

The model, called Instant NeRF, takes only seconds to train using dozens of still photos plus the camera angles they were taken from. After that, it can render a 3D scene within just "tens of milliseconds." Like other NeRF techniques, it requires images taken from multiple positions. And for scenes with multiple subjects, pictures taken without too much motion are preferred; otherwise, the result will be blurry.
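Conceptually, once a NeRF model is trained, each pixel is rendered by shooting a ray through the scene, querying the network for color and density at sample points along the ray, and compositing those samples front to back. The sketch below illustrates that rendering step in NumPy; the `radiance_field` function is a toy stand-in for the trained network, and every parameter here is an illustrative assumption, not NVIDIA's implementation:

```python
import numpy as np

def radiance_field(xyz, viewdir):
    """Toy stand-in for a trained NeRF network.
    A real NeRF uses a neural network here; this version is just a
    fixed, made-up function returning (rgb, density) per sample."""
    h = np.sin(xyz @ np.arange(1, 4) + viewdir @ np.arange(1, 4))
    rgb = 0.5 + 0.5 * np.stack([np.sin(h), np.cos(h), np.sin(2 * h)], axis=-1)
    density = np.maximum(h + 1.0, 0.0)
    return rgb, density

def render_ray(origin, direction, near=0.0, far=1.0, n_samples=64):
    """Classic NeRF volume rendering: sample points along a camera ray,
    query the field, and alpha-composite the samples into one pixel."""
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction            # (n_samples, 3) sample positions
    rgb, sigma = radiance_field(pts, direction)
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))  # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)              # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)       # composited pixel color

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(pixel)
```

Training inverts this process: the network's weights are adjusted so that rays rendered this way reproduce the input photos from their known camera positions.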


NVIDIA explains that early NeRF models didn't take too long to produce results either: they could render a 3D scene in just a few minutes, even if the subject in some of the images was obstructed by objects such as pillars and furniture. Training them, however, took hours. NVIDIA's version takes only seconds to train because it relies on a technique the company developed called multi-resolution hash grid encoding, which is optimized to run efficiently on its GPUs. It can even run on a single GPU, though it's fastest on cards with Tensor Cores, which provide a performance boost for artificial intelligence workloads.
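The multi-resolution hash grid encoding works by storing small learnable feature tables at several grid resolutions: each query point is hashed into every table, the nearby entries are interpolated, and the per-level features are concatenated before being fed to a tiny network. A CPU-only sketch of that lookup, where all sizes, the growth factor, and the hash primes are illustrative choices rather than NVIDIA's exact configuration:

```python
import numpy as np

# Illustrative hyperparameters (assumptions, not NVIDIA's settings).
NUM_LEVELS = 4          # L: number of resolution levels
TABLE_SIZE = 2 ** 14    # T: entries per hash table
FEATURE_DIM = 2         # F: features stored per entry
BASE_RES = 16           # coarsest grid resolution
GROWTH = 1.5            # per-level resolution growth factor
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

rng = np.random.default_rng(0)
# One small learnable feature table per level (random init here;
# in real training these entries are optimized by gradient descent).
tables = rng.normal(0.0, 1e-2, size=(NUM_LEVELS, TABLE_SIZE, FEATURE_DIM))

def hash_coords(coords):
    """Spatial hash of integer grid coordinates, shape (..., 3)."""
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for d in range(3):
        h ^= coords[..., d].astype(np.uint64) * PRIMES[d]
    return h % TABLE_SIZE

def encode(xyz):
    """Map points in [0, 1)^3 to concatenated per-level features."""
    feats = []
    for level in range(NUM_LEVELS):
        res = int(BASE_RES * GROWTH ** level)
        pos = xyz * res
        lo = np.floor(pos).astype(np.int64)   # corner of enclosing grid cell
        frac = pos - lo                        # position within the cell
        interp = np.zeros(xyz.shape[:-1] + (FEATURE_DIM,))
        # Trilinear interpolation over the cell's 8 corners.
        for corner in range(8):
            offset = np.array([(corner >> d) & 1 for d in range(3)])
            w = np.prod(np.where(offset, frac, 1.0 - frac), axis=-1)
            idx = hash_coords(lo + offset)
            interp += w[..., None] * tables[level][idx]
        feats.append(interp)
    return np.concatenate(feats, axis=-1)

points = rng.random((5, 3))        # 5 query points in the unit cube
print(encode(points).shape)        # (5, NUM_LEVELS * FEATURE_DIM)
```

Because the tables are tiny compared with a large neural network, most of the model's capacity lives in these fast lookups rather than in expensive matrix multiplies, which is what makes training on a GPU so quick.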

The company believes that Instant NeRF could be used to train robots and to help autonomous driving systems understand the sizes and shapes of real-world objects. NVIDIA also sees a future for the technique in entertainment and architecture, where it could be used as a way to generate 3D models of real environments that creators can modify during the planning process.