Don't worry, they only look like the Pokémon of your nightmares. The images you are about to see are, in fact, at the very bleeding edge of machine-generated imagery, mixed with collaborative human-AI production by artist Alex Reben and a little help from some anonymous Chinese artists.
Reben's latest work, dubbed AmalGAN, is derived from Google's BigGAN image-generation engine. Like other GANs (generative adversarial networks), BigGAN uses a pair of competing AIs: a generator that produces images from random noise and a discriminator that grades those images on how closely they resemble the training material. Unlike previous image generators, however, BigGAN is backed by Google's mammoth computing power and uses that capability to create incredibly lifelike images.
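That adversarial back-and-forth can be sketched in a few lines. The toy below is emphatically not BigGAN: the "data" is just the number 3.0, the generator is a single parameter and the discriminator is a one-variable logistic scorer, all invented for illustration. But the alternating generate-and-grade dynamic is the same one BigGAN runs at enormous scale.

```python
import math

# Toy 1-D sketch of the adversarial setup (illustrative, not BigGAN).
# "Real" data is the constant 3.0, the generator is a single parameter
# theta, and the discriminator D(x) = sigmoid(w*x + c) grades samples.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

REAL = 3.0
theta = 0.0          # the generator's sole output
w, c = 0.0, 0.0      # the discriminator's weights
lr = 0.1

for _ in range(200):
    # Discriminator step: raise D(real), lower D(fake).
    d_real = sigmoid(w * REAL + c)
    d_fake = sigmoid(w * theta + c)
    w -= lr * (-(1 - d_real) * REAL + d_fake * theta)
    c -= lr * (-(1 - d_real) + d_fake)
    # Generator step: nudge theta so that D(fake) rises.
    d_fake = sigmoid(w * theta + c)
    theta -= lr * (-(1 - d_fake) * w)

# After training, theta has climbed from 0.0 toward the "real" value 3.0:
# the generator learned to produce samples the discriminator can't reject.
```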
But more important, it can also be leveraged to create psychedelic works of art, which is what Joel Simon has done with the GANbreeder app. This web-based program uses the BigGAN engine to combine separate images into mashups -- say, 40 percent beagle, 60 percent bookcase. What's more, it can take these generated images and combine (or "breed") them into second-generation "child" images. Repeating this breeding process results in bizarre, dreamlike pictures.
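Under the hood, a mashup like that is mostly vector arithmetic: BigGAN conditions each image on a class vector plus a latent noise vector, so a "40 percent beagle" mix and the "breeding" of two parents both amount to weighted blends of those vectors. The sketch below is illustrative only; the class indices, dimensions, and function names are stand-ins, not GANbreeder's actual code.

```python
import random

NUM_CLASSES, LATENT_DIM = 1000, 128   # ImageNet-style setup (illustrative)
BEAGLE, BOOKCASE = 162, 453           # stand-in class indices

def class_vector(weights):
    """A soft one-hot class vector built from {class index: weight}."""
    c = [0.0] * NUM_CLASSES
    for idx, w in weights.items():
        c[idx] = w
    return c

random.seed(0)
def latent():
    """A random latent noise vector, one per generated image."""
    return [random.gauss(0, 1) for _ in range(LATENT_DIM)]

# "40 percent beagle, 60 percent bookcase" is just a weighted class vector.
parent_a = (class_vector({BEAGLE: 0.4, BOOKCASE: 0.6}), latent())
parent_b = (class_vector({BEAGLE: 1.0}), latent())

def breed(a, b, alpha=0.5):
    """A 'child' image's conditioning: elementwise blend of both parents."""
    mix = lambda u, v: [alpha * x + (1 - alpha) * y for x, y in zip(u, v)]
    return (mix(a[0], b[0]), mix(a[1], b[1]))

child = breed(parent_a, parent_b)   # child is 70% beagle, 30% bookcase
```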
Reben's contribution is to take that GANbreeder process and automate as much of it as humanly possible. Per the AmalGAN site:
1. an AI combines different words together to generate an image of what it thinks those words look like
2. the AI then produces variants of those images by "breeding" them with other images, creating "child" images
3. another AI shows the artist several "child" images, measuring his brainwaves and body-signals to select which image he likes best
4. steps 2 and 3 are repeated until the AI determines it has reached an optimal image
5. another AI increases the resolution of the image by filling in blanks with what it thinks should exist there
6. the result is sent to be painted on canvas by anonymous painters in a Chinese painting village
7. a final AI looks at the image, tries to figure out what is in it, and makes a title
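Steps 2 through 4 boil down to a selection loop: breed variants, score them against the viewer's reaction, keep the favorite, repeat. The sketch below shows that loop with stub functions invented here for illustration; `generate_children` stands in for GANbreeder and `preference_score` for Reben's biosignal model.

```python
import random

random.seed(42)

def generate_children(parent, n=4):
    """Stub 'breeding' (step 2): jitter the parent's conditioning numbers."""
    return [[x + random.gauss(0, 0.1) for x in parent] for _ in range(n)]

def preference_score(image):
    """Stub for the brainwave/body-signal preference model (step 3)."""
    target = [0.4, 0.6]   # pretend the viewer 'likes' this mix best
    return -sum((a - b) ** 2 for a, b in zip(image, target))

image = [1.0, 0.0]                               # starting image conditioning
for generation in range(50):                     # step 4: repeat...
    children = generate_children(image)          # step 2: breed variants
    image = max(children, key=preference_score)  # step 3: pick the favorite
```

After a few dozen generations the surviving image drifts toward whatever the preference model rewards, which is the whole point of putting Reben's involuntary reactions in the selection seat.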
The first two steps are handled by GANbreeder. "As far as I understand it, right now [GANbreeder mixes images] randomly," Reben told Engadget. "So it decides to either increase or decrease the percentages of the two images or add new models. You know, like 5 percent cow, and that'll be one of the images that it shows."
Once the system has conceived a sufficient selection of potential pictures, Reben pares down the collection using a separate AI trained to determine how much he likes a specific piece based on his physical reaction to it.
"I trained a deep learning system on the body sensors that I was wearing," Reben explained. "I had a program show me both good and bad art -- art that I liked, and art that I didn't like -- and I recorded the data." He then used that data to train a simple neural network to figure out the physiological differences between his reactions.
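The core idea, learning a binary liked/disliked label from body-sensor readings, can be sketched with a tiny logistic-regression stand-in for his neural network. Everything below (the three features, the synthetic data, the numbers) is invented for illustration; Reben's actual model was trained on his own recorded sensor data.

```python
import math
import random

random.seed(1)

def sample(liked):
    """Synthetic (EEG, heart-rate, GSR) features; 'liked' shifts them up."""
    base = 1.0 if liked else -1.0
    return [base + random.gauss(0, 0.5) for _ in range(3)], int(liked)

data = [sample(i % 2 == 0) for i in range(200)]   # half liked, half not

w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1

def predict(x):
    """Probability that this sensor reading means 'liked'."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(20):                       # a few passes of gradient descent
    for x, y in data:
        err = predict(x) - y              # logistic-loss gradient signal
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
```

On cleanly separated synthetic data like this, even a one-layer model classifies nearly perfectly; the hard part in practice is that real biosignals are far noisier.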
"Basically, it gives you that sort of dichotomous indication of what this art is [to me] from my brain waves and body signals," he continued. "It picks up on EEG; I also have heart rate and GSR. I think I might also add facial-emotion-recognition stuff through my webcam."
The selection process varied between image sets, Reben said. Sometimes the "right" picture would appear among the first ones the AI presented; other times he had to dig through multiple generations of child images to find one he liked.
Once he's selected the specific images he plans to include in the official project, Reben has the digital images oil painted onto canvas by anonymous Chinese artists. "The easiest 'why' is because I can't paint," Reben quipped. "Using anonymous Chinese painters is another link in this autonomous system, where my hand is not on the artworks -- just my brain and my eyeballs."
Transferring the works to a physical medium also helps sidestep an inherent shortcoming of the BigGAN system: The images are so resource-intensive to produce that they've yet to be generated at resolutions larger than 512 x 512 pixels. The anonymous artists are "basically using human brain power to upscale that image onto a canvas," he said. "So that aspect of it is also interesting because there's gonna be a little bit of human interpretation."
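Computationally, what the painters replace is an upscaling step. A naive nearest-neighbor upscale, sketched below on a tiny stand-in "image," only repeats existing pixels and invents no new detail; that missing detail is exactly the gap a super-resolution model (step 5) or a human painter has to fill in.

```python
def upscale_nearest(img, factor):
    """Nearest-neighbor upscale: repeat each pixel factor x factor times."""
    return [[img[r // factor][c // factor]
             for c in range(len(img[0]) * factor)]
            for r in range(len(img) * factor)]

tiny = [[0, 1],
        [2, 3]]               # 2x2 stand-in for a 512 x 512 BigGAN output
big = upscale_nearest(tiny, 2)
# big == [[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]
```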
Finally, Reben uses Microsoft's CaptionBot AI to create titles for each image. "I thought it was interesting removing more and more of a human from the process," Reben concluded. "I also like seeing what the AI interprets these as ... because it doesn't catch everything."
Gallery: Amalgan | 8 Photos
For now, the BigGAN engine doesn't have very many practical applications, and its research paper, which was published in September, is under review for a 2019 AI conference. The system itself has a bit of a counting problem, as evidenced by its continual insistence that clock faces have more than two hands and spiders have anywhere from four to 17 legs, but these idiosyncrasies could prove a boon to artists like Reben and Simon.
"One of the things Joel [Simon] is doing... is he would like to turn that website into a tool for creative people," Reben said. Artists would be able to train the system on their own images, not just Google's stock set, experiment with the output levels, and "use it as a way to sort of spark imagination and creativity, which I think is great."
If you're interested in getting prints of any of these pieces, check out the Charles James Gallery.