NVIDIA found a way to train AI with very little data
The breakthrough in training generative adversarial networks could open AI up to more fields.
NVIDIA has developed a new approach for training generative adversarial networks (GANs) that could one day make them suitable for a greater variety of tasks. Before getting into NVIDIA’s work, it helps to know a bit about how GANs work. Every GAN consists of two competing neural networks: a generator and a discriminator.
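For illustration only, here is how those two networks might look in a minimal PyTorch sketch. The layer sizes, noise dimension and image resolution are all assumptions for the example, not NVIDIA’s actual architecture:

```python
import torch.nn as nn

# Generator: maps a random noise vector to a fake image.
class Generator(nn.Module):
    def __init__(self, noise_dim=100, img_pixels=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_pixels),
            nn.Tanh(),  # pixel values scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# Discriminator: scores how "real" an image looks.
class Discriminator(nn.Module):
    def __init__(self, img_pixels=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_pixels, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # raw logit: higher = more "real"
        )

    def forward(self, x):
        return self.net(x)
```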
In a GAN whose goal is to create new images, the discriminator is the network that examines thousands of sample images. It then uses that data to “coach” the generator. To produce consistently believable results, traditional GANs need somewhere in the range of 50,000 to 100,000 training images. With too few, they tend to run into a problem called overfitting: the discriminator effectively memorizes the small training set and no longer has enough of a base to coach the generator effectively.
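That coaching happens through the training loop itself: the discriminator scores real and generated images, and the generator updates its weights to fool it. A hedged sketch of a single training step, building on the networks above (the loss setup and function name are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def gan_training_step(gen, disc, real_images, opt_g, opt_d, noise_dim=100):
    """One illustrative GAN step. real_images is a (batch, pixels) tensor;
    gen and disc are the networks sketched above."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator update: learn to tell real samples from generated ones.
    z = torch.randn(batch, noise_dim)
    fake = gen(z).detach()  # detach: don't update the generator here
    d_loss = (F.binary_cross_entropy_with_logits(disc(real_images), real_labels)
              + F.binary_cross_entropy_with_logits(disc(fake), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: the discriminator's gradients "coach" the generator
    # toward images it scores as real. With too few real images, the
    # discriminator memorizes them and this feedback stops being useful.
    z = torch.randn(batch, noise_dim)
    g_loss = F.binary_cross_entropy_with_logits(disc(gen(z)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```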
In the past, one way AI researchers have tried to get around this problem is with an approach called data augmentation. Returning to the image example, when there isn’t a lot of material to work with, they create “distorted” copies of what is available. Distorting, in this case, could mean cropping an image, rotating it or flipping it. The idea is that the network never sees the exact same image twice.
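In code, those distortions are just randomized image transforms. A minimal sketch using torchvision, with the crop size and rotation range chosen arbitrarily for illustration:

```python
import torchvision.transforms as T

# Randomized distortions so the network rarely sees
# the exact same pixels twice during training.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),                    # mirror left/right
    T.RandomRotation(degrees=15),                     # small random rotation
    T.RandomResizedCrop(size=64, scale=(0.8, 1.0)),   # random crop, resized back
])

# distorted = augment(image)  # yields a fresh random variant on every call
```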
The problem with that approach is that the GAN learns to mimic those distortions instead of creating something new. NVIDIA’s new adaptive discriminator augmentation (ADA) approach still uses data augmentation, but does so adaptively: rather than distorting images at a fixed rate throughout the entire training process, it applies distortions selectively, and just enough for the GAN to avoid overfitting.
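The adaptive idea can be sketched roughly: watch how confidently the discriminator classifies real images, and dial the augmentation probability up when it looks overfit, down when it doesn’t. The target value and step size below are illustrative guesses, not NVIDIA’s published settings:

```python
import torch

class AdaptiveAugment:
    """Sketch of adaptive augmentation control: raise the augmentation
    probability p when the discriminator looks overfit, lower it otherwise."""

    def __init__(self, target=0.6, step=0.01):
        self.p = 0.0          # start with no augmentation
        self.target = target  # desired overfitting signal (assumed value)
        self.step = step      # how quickly p adapts (assumed value)

    def update(self, disc_real_logits):
        # Overfitting signal: how strongly, on average, the discriminator
        # calls real images "real" (sign of its raw logit outputs).
        signal = torch.sign(disc_real_logits).mean().item()
        if signal > self.target:
            self.p = min(1.0, self.p + self.step)  # overfitting: augment more
        else:
            self.p = max(0.0, self.p - self.step)  # underfitting: augment less
        return self.p

# Each distortion is then applied with probability p, so the augmentation
# fades out on its own once the discriminator stops memorizing.
```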
The potential impact of NVIDIA’s approach is more meaningful than you might think. Training an AI to write a new text-based adventure game is easy because there’s so much material for the algorithm to work with. The same is not true for a lot of other tasks researchers could turn to GANs for. Training an algorithm to spot a rare neurological disorder, for example, is difficult precisely because of its rarity. A GAN trained with NVIDIA’s ADA approach could get around that problem. As an added bonus, doctors and researchers could share their findings more easily, since they’d be working from a base of images created by an AI rather than scans of real patients. NVIDIA will share more information about its new ADA approach at the upcoming NeurIPS conference, which starts on December 6th.