Google has made the models open source so any developer can adopt them. They can handle chores like object detection, face attribute recognition, fine-grained classification (recognizing a dog breed, for instance) and landmark recognition. The tech is part of TensorFlow, Google's deep learning framework, which recently shrunk down to mobile size in a new version called TensorFlow Lite.
MobileNets is not one-size-fits-all: Google has built 16 pre-trained models "for use in mobile projects of all sizes." The larger the model, the better it is at recognizing landmarks, faces or doggos, with the most CPU-intensive ones hitting between 70.7 and 89.5 percent accuracy. That's not far from Google's cloud-based AI, which can recognize and caption objects with around 94 percent accuracy, last we checked.
With different pre-trained models at their disposal, developers can pick one that best suits the memory and processing requirements for an app. To integrate the new models, developers need to use TensorFlow Mobile, a system designed to ease deployment of AI apps on iOS and Android.
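To make the trade-off concrete: the 16 MobileNet variants come from combining four width multipliers with four input resolutions, and a developer picks the heaviest one that fits the device budget. Here is a minimal Python sketch of that selection logic; the cost formula is a rough illustration of how compute scales (it is not Google's published benchmark data), and the function names are our own.

```python
# Sketch: choosing a MobileNet variant by compute footprint.
# The 16 released MobileNet checkpoints are combinations of four width
# multipliers and four input resolutions. The cost model below is an
# illustrative simplification, not Google's published figures.

WIDTH_MULTIPLIERS = [0.25, 0.50, 0.75, 1.0]
RESOLUTIONS = [128, 160, 192, 224]

def variants():
    """Enumerate the 16 (width multiplier, resolution) combinations."""
    return [(w, r) for w in WIDTH_MULTIPLIERS for r in RESOLUTIONS]

def rough_cost(width, resolution):
    # Compute cost grows roughly with width^2 * resolution^2 --
    # shrinking either knob shrinks the model's work quadratically.
    return (width ** 2) * (resolution ** 2)

def pick_variant(budget):
    """Return the most capable variant whose rough cost fits the budget,
    or None if even the smallest model is too expensive."""
    affordable = [v for v in variants() if rough_cost(*v) <= budget]
    if not affordable:
        return None
    return max(affordable, key=lambda v: rough_cost(*v))
```

With a generous budget this picks the full-width, 224-pixel model; as the budget shrinks, it falls back toward the 0.25-width, 128-pixel variant, mirroring the size-versus-accuracy ladder Google describes.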
From a consumer standpoint, you'll likely start to see apps that can do basic image identification and other useful functions, with more speed, less data use and better privacy. An example of that could be Google's new Lens product, which can pick out landmarks, products and faces using a combination of smartphone and cloud processing. The tech probably won't hit its stride, though, until we see new chips that support it -- and both Google and Apple are already working on that.