A consortium of 40 tech companies, including the likes of Facebook and Google, has come together to release a set of evaluation benchmarks for AI. By measuring AI products against these benchmarks, companies in the field will be able to identify optimal product solutions and, according to the consortium, MLPerf, "take confidence" that they're deploying the right solutions.
The benchmarks, named MLPerf Inference v0.5, center on three common machine learning tasks: image classification, object detection and machine translation. Because processing power varies widely across devices, there are separate benchmarks for AI across various platforms, such as smartphones, servers and chips.
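To give a sense of what an inference benchmark measures, here is a minimal sketch of timing per-query latency for a classification model. This is purely illustrative: the `dummy_classifier` and `measure_latency` names are hypothetical, and MLPerf's actual methodology (defined models, datasets, quality targets and load scenarios) is far more involved than this loop.

```python
import time
import statistics

def dummy_classifier(image):
    """Stand-in for a real image-classification model (hypothetical;
    an actual benchmark would run a defined model on a defined dataset)."""
    return sum(image) % 10  # pretend class label

def measure_latency(model, inputs, runs=100):
    """Time each inference and report median and 90th-percentile latency
    in milliseconds -- the kind of metric an inference benchmark collects."""
    latencies = []
    for i in range(runs):
        sample = inputs[i % len(inputs)]
        start = time.perf_counter()
        model(sample)
        latencies.append((time.perf_counter() - start) * 1000.0)
    latencies.sort()
    return {
        "median_ms": statistics.median(latencies),
        "p90_ms": latencies[int(0.9 * len(latencies)) - 1],
    }

if __name__ == "__main__":
    images = [[1, 2, 3], [4, 5, 6]]
    print(measure_latency(dummy_classifier, images))
```

Reporting tail latency (here the 90th percentile) rather than only an average is common in inference benchmarking, since deployed systems must meet latency targets under realistic load, not just on a typical query.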
As well as providing best-practice guidance for companies in the AI field, it's hoped the benchmarks will help kick-start further innovation: despite the hype around AI, organizations have been slow to adopt the technology. In a statement, MLPerf's general chair Peter Mattson said, "By creating common and relevant metrics to assess new machine learning software frameworks, hardware accelerators, and cloud and edge computing platforms in real-life situations, these benchmarks will establish a level playing field that even the smallest companies can use."