A consortium of 40 tech companies, including Facebook and Google, has released a set of universal benchmarks for evaluating the performance of artificial intelligence tools, aiming to help businesses navigate the fast-growing field.
These benchmarks, named MLPerf Inference v0.5, cover three common machine learning tasks: image classification, object detection, and machine translation.
These are meant to help companies compare various AI tools to see which works best for them as they pursue their own AI initiatives.
Peter Mattson, General Chairman of the consortium, said:

“By creating common and relevant metrics to assess new machine learning software frameworks, hardware accelerators, and cloud and edge computing platforms in real-life situations, these benchmarks will establish a level playing field that even the smallest companies can use.”
Beyond providing best-practice guidance for companies working with AI, the benchmarks are expected to spur further innovation: despite the hype surrounding AI, companies have been slow to adopt the technology.
In a survey of 2,473 organizations worldwide, 18% had AI models in production, 16% were at the proof-of-concept stage, and 15% were experimenting with AI.
David Schubmehl, research director for AI systems at IDC, said benchmarks can help companies better address the complexities of AI adoption, allowing them to make apples-to-apples comparisons among the many AI software and hardware tools available.
“It’s coming at a useful time as we’re seeing more organizations move from experimentation to production,” he said.