Facebook's new mobile AI can process video in real time

It's the first result of a new push for neural networks on phones.

Facebook has started rolling out its "Caffe2Go" AI platform, which applies advanced style-transfer video effects in real time using only your iOS or Android smartphone's horsepower. While the painterly effects are cool (see the video, below), the tech behind them is much more interesting. Deep learning normally requires that content "be sent off to data centers for processing on big-compute servers," Facebook wrote, but with Caffe2Go, the processing can be done "in the palm of your hand."

The new platform is part of a larger AI effort that includes the machine-vision Lumos platform used to suss out images that violate its community standards. Facebook has also open-sourced similar tech on GitHub for outside developers. It's not the only company doing AI projects, of course: Google released its TensorFlow framework to the open-source community, and Microsoft recently made its Cognitive Toolkit available to developers.

Facebook first showed off Caffe2Go last month, then brought some of the effects to a new camera in a limited European release. Much like the Prisma app, it transfers the painting styles of Van Gogh or Monet onto any still or moving image. Processing live video normally requires at least a well-equipped PC, but Facebook says "we were able to provide AI [processing] on some mobile phones at less than 1/20th of a second," six times faster than an eye blink.
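The arithmetic behind that comparison is easy to verify. A rough check, assuming a typical blink lasts about a third of a second (a figure the article implies but doesn't state):

```python
# Sanity-checking the latency claim.
# Assumption: an average eye blink lasts ~0.3 s (not stated in the article).
frame_time = 1 / 20   # claimed per-frame AI processing time, in seconds
blink_time = 0.3      # assumed blink duration, in seconds

fps = 1 / frame_time                      # frames processed per second
speedup_vs_blink = blink_time / frame_time

print(f"{fps:.0f} frames per second")                  # 20 frames per second
print(f"{speedup_vs_blink:.0f}x faster than a blink")  # 6x faster than a blink
```

In other words, a 1/20th-of-a-second budget per frame is enough to keep up with 20 fps video on-device.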

Company engineers had to design software that worked within a smartphone's limits on memory and computing power. At the same time, they wanted it to scale up for use on servers or workstation-class machines. To that end, the team created a lightweight, UNIX-based system 100 times smaller than comparable deep-learning programs that works on CPU, GPU, Android and iOS. They then created add-in modules, including one that uses NEON, the SIMD instruction set built into ARM CPUs, to speed up processing on mobile chips.
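Facebook hasn't published Caffe2Go's internals, but the design it describes, a small portable core with optional accelerated modules, can be sketched in miniature. Everything below (the names, the dot-product "kernel") is hypothetical, purely to illustrate a runtime picking a fast backend (such as a NEON-optimized kernel) when the device supports it and falling back to a portable one otherwise:

```python
# Toy illustration of a pluggable-kernel design (all names hypothetical).
# A real runtime would register NEON or GPU kernels compiled for the device;
# here both "backends" are plain Python so the sketch stays runnable.

def dot_portable(a, b):
    """Baseline kernel: works on any hardware."""
    return sum(x * y for x, y in zip(a, b))

def dot_fast(a, b):
    """Stand-in for an optimized (e.g. NEON SIMD) kernel."""
    # In a real framework this would be vectorized native code.
    return sum(x * y for x, y in zip(a, b))

# Registry maps an op name to candidate kernels, fastest first.
KERNELS = {"dot": [("neon", dot_fast), ("portable", dot_portable)]}

def run(op, *args, available=("portable",)):
    """Dispatch to the best kernel the current device supports."""
    for backend, fn in KERNELS[op]:
        if backend in available:
            return fn(*args)
    raise RuntimeError(f"no usable kernel for {op}")

print(run("dot", [1, 2, 3], [4, 5, 6]))                       # 32
print(run("dot", [1, 2, 3], [4, 5, 6], available=("neon",)))  # 32
```

The appeal of this layout is that the same model code runs everywhere; only the kernel registry changes between a phone and a server, which matches Facebook's stated goal of one system spanning mobile and workstation hardware.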

Facebook is giving developers access to Caffe2Go via its stack, and plans to open-source parts of it "over the coming months." While video style transfer is a good test of the technology, it's capable of handling other AI tasks involving images, speech and more. It won't result in Westworld any time soon, but it should open up the possibilities of what you can do on your smartphone in the near future.