Image credit: monsitj via Getty Images

NVIDIA found a way to train AI with very little data

The breakthrough with training generative adversarial networks could open up AI to more fields.
Igor Bonifacic, @igorbonifacic
December 7, 2020

NVIDIA has developed a new approach for training generative adversarial networks (GANs) that could one day make them suitable for a greater variety of tasks. Before getting into NVIDIA's work, it helps to know a bit about how GANs work. Every GAN consists of two competing neural networks: a generator and a discriminator.

In a GAN whose goal is to create new images, the discriminator is the network that examines thousands of sample images. It then uses that knowledge to "coach" its counterpart, the generator. To produce consistently believable results, traditional GANs need somewhere in the range of 50,000 to 100,000 training images. With too few, they tend to run into a problem called overfitting: the discriminator doesn't have a broad enough base to effectively coach the generator.
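To make the two-network setup concrete, here is a toy sketch of a single adversarial step in NumPy. This is not NVIDIA's code; the one-layer "networks," the 1-D data, and all the variable names are illustrative assumptions. It only shows the roles: the discriminator grades real and generated samples, and each side's loss pushes in the opposite direction.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    """Map random noise z to fake 'samples' (here: 1-D points)."""
    return np.tanh(z @ w)               # tiny one-layer generator

def discriminator(x, v):
    """Score samples: sigmoid output near 1 means 'looks real'."""
    return 1.0 / (1.0 + np.exp(-(x @ v)))

# Toy stand-in for training data: 'real' samples from a fixed distribution.
real = rng.normal(loc=0.5, scale=0.1, size=(64, 1))
z = rng.normal(size=(64, 4))            # noise fed to the generator

w = rng.normal(size=(4, 1)) * 0.1       # generator weights
v = rng.normal(size=(1, 1)) * 0.1       # discriminator weights

fake = generator(z, w)
d_real = discriminator(real, v)         # discriminator grades real samples...
d_fake = discriminator(fake, v)         # ...and the generator's output

# Standard GAN objectives: the discriminator wants d_real -> 1 and
# d_fake -> 0; the generator wants d_fake -> 1 (i.e. to fool it).
d_loss = -np.mean(np.log(d_real + 1e-8) + np.log(1 - d_fake + 1e-8))
g_loss = -np.mean(np.log(d_fake + 1e-8))
print(d_loss, g_loss)
```

In a real GAN, gradients of these two losses would update the two networks in alternation; the sketch stops at the loss computation, which is where the "competition" lives.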

In the past, one way AI researchers have tried to get around this problem is an approach called data augmentation. Using an image algorithm as an example again, when there isn't a lot of material to work with, they create "distorted" copies of what is available. Distorting, in this case, could mean cropping an image, rotating it or flipping it. The idea is that the network never sees the exact same image twice.
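A minimal sketch of that classic augmentation idea, using a NumPy array as a stand-in for an image (the `distort` function and its three transforms are my illustration, not any particular library's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def distort(img, rng):
    """Return a randomly cropped, rotated, or flipped copy of img,
    so the network never sees the identical image twice."""
    choice = rng.integers(3)
    if choice == 0:                       # crop a corner, pad back to size
        h, w = img.shape
        img = np.pad(img[: h - 2, : w - 2], ((0, 2), (0, 2)))
    elif choice == 1:                     # rotate 90 degrees
        img = np.rot90(img)
    else:                                 # mirror horizontally
        img = np.fliplr(img)
    return img

img = np.arange(64, dtype=float).reshape(8, 8)
copies = [distort(img, rng) for _ in range(4)]
print(all(c.shape == img.shape for c in copies))
```

Each call yields a same-sized but visibly different copy, which is exactly what lets a small dataset masquerade as a larger one.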

The problem with that approach is that it can lead to a situation in which the GAN learns to mimic those distortions instead of creating something new. NVIDIA's new adaptive discriminator augmentation (ADA) approach still uses data augmentation but does so adaptively. Instead of distorting images at a fixed rate throughout the entire training process, it applies distortion selectively and just enough so that the GAN avoids overfitting.
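The "adaptive" part can be sketched as a tiny feedback controller. This is a simplified illustration of the idea, not NVIDIA's exact rule: the heuristic here (treat the fraction of real images the discriminator scores positively as an overfitting signal) and all constants are assumptions for the sketch.

```python
import numpy as np

def update_p(p, d_real_logits, target=0.6, step=0.01):
    """Nudge the augmentation probability p up when the discriminator
    looks overconfident on real images (a sign of overfitting), and
    down otherwise, so distortion is applied only as much as needed."""
    overfit_signal = np.mean(np.sign(d_real_logits))
    return float(np.clip(p + step * np.sign(overfit_signal - target), 0.0, 1.0))

p = 0.0
overconfident = np.full(64, 2.0)    # discriminator very sure reals are real
for _ in range(10):
    p = update_p(p, overconfident)  # p climbs while overfitting persists
print(round(p, 2))                  # -> 0.1
```

Because p falls back toward zero when the overfitting signal fades, the network mostly trains on undistorted images and never has the chance to learn the distortions themselves.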

The potential outcome of NVIDIA's approach is more meaningful than you might think. Training an AI to write a new text-based adventure game is easy because there's so much material for the algorithm to work with. The same is not true for many other tasks researchers could turn to GANs for. For example, training an algorithm to spot a rare neurological disorder is difficult precisely because of its rarity. However, a GAN trained with NVIDIA's ADA approach could get around that problem. As an added bonus, doctors and researchers could share their findings more easily, since they'd be working from a base of images created by an AI rather than scans of real-world patients. NVIDIA will share more information about its new ADA approach at the upcoming NeurIPS conference, which starts on December 6th.
