MLCommons' latest MLPerf Training results (v2.0), issued today, are broadly similar to v1.1, released last December. Nvidia still dominates, but less so (no grand sweep of wins). Relative newcomers to the exercise – AI chip/system makers Graphcore and Habana Labs/Intel – along with Google again posted strong showings. Google had four top scores, basically splitting wins with Nvidia in the "available" category. The number of submitters grew to 21 from 14 in December, and overall the results of this round were about 1.8x better than the last round. This may be what early success looks like for MLPerf, the four-year-old AI technology benchmarking effort created and run by MLCommons.

First-time MLPerf Training submitters were ASUSTeK, CASIA, H3C, HazyResearch, Krai, and MosaicML. Repeat submitters included Azure, Baidu, Dell, Fujitsu, GIGABYTE, Google, Graphcore, HPE, Inspur, Intel-Habana Labs, Lenovo, Nettrix, Nvidia, Samsung and Supermicro. The most notable absentee, perhaps, was Cerebras Systems, which has consistently expressed disinterest in MLPerf participation.

"MLPerf represents a diverse set of real-world use cases," said Shar Narasimhan, Nvidia director of product management for accelerated computing. "Speech recognition and language processing models are key, along with recommenders, reinforcement learning and computer vision." All of the benchmarks remain the same as the previous round, with the exception of lightweight object detection, which has been improved.

"We had 21 companies submit into the various MLPerf benchmarks in this particular round, and they had over 260 submissions," Narasimhan said. "It's a real increase in participation and speaks well for the benchmark as a whole." The 21 submitters used four different accelerators, from Google, Graphcore, Habana and Nvidia. "We're excited to see that 90 percent of the MLPerf submissions in this particular round use the Nvidia AI platform," he added.

Google got into the act, reporting, "Today's release of MLPerf 2.0 results highlights the public availability of the most powerful and efficient ML infrastructure anywhere. Google's TPU v4 ML supercomputers set performance records on five benchmarks, with an average speedup of 1.42x over the next fastest non-Google submission, and 1.5x vs our MLPerf 1.0 submission. Even more compelling – four of these record runs were conducted on the publicly available Google Cloud ML hub that we announced at Google I/O." One Google win was in the research, not the available, category.
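MLPerf Training reports time-to-train separately for each benchmark, and MLPerf itself defines no official composite score, so headline figures like the round's roughly 1.8x gain or Google's 1.42x "average speedup" are aggregates computed by the submitters. Here is a minimal sketch of one common aggregation – the geometric mean of per-benchmark speedup ratios; the benchmark names and timings are hypothetical placeholders, and the article does not say which averaging method Google used.

```python
# Illustrative only: aggregating per-benchmark speedups the way vendors
# often summarize MLPerf results. All numbers are hypothetical placeholders,
# not actual MLPerf v2.0 submissions.
from math import prod

# Hypothetical time-to-train (minutes) for two systems on three benchmarks.
system_a = {"resnet": 20.0, "bert": 16.0, "dlrm": 8.0}
system_b = {"resnet": 30.0, "bert": 24.0, "dlrm": 10.0}

# Per-benchmark speedup of A over B (lower time-to-train is better).
speedups = {k: system_b[k] / system_a[k] for k in system_a}

# The geometric mean is the usual way to average ratios; an arithmetic
# mean would let one outsized win dominate the summary.
geo_mean = prod(speedups.values()) ** (1 / len(speedups))

print(speedups)            # {'resnet': 1.5, 'bert': 1.5, 'dlrm': 1.25}
print(f"{geo_mean:.2f}x")  # aggregate speedup of A over B, about 1.41x
```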
Because system configurations differ widely and because there are a variety of tests, there is no single MLPerf winner as there is in the Top500 list. Nvidia (and Google) had plenty to crow about – top performance in four categories – but the MLPerf Training exercise is starting to have a very different feel, perhaps less dramatic but also more useful.

Even when Nvidia was sweeping every category – driven by its formidable technical strength and a lack of competitors – it was always necessary to dig into individual system submissions to make fair comparisons among them (a sketch of that kind of digging appears at the end of this article); Nvidia GPUs were really the only accelerator game in town. Now, the (slowly) growing number of ML accelerators participating in MLPerf, their improving performance versus Nvidia, and MLPerf's effort to regularly freshen its test suite are transforming MLPerf into a broader lens through which to assess different AI accelerators and systems, including cloud instances.

The analyst community also seems to think MLPerf is carving out a durable, useful role. Hyperion Research analyst Alex Norton said, "Overall, I think MLPerf is growing into a more widely used and accepted tool in the AI hardware/solution space, mainly due to the large number of categories and mini-apps that are available to the providers. It allows for different technologies to highlight their specific capabilities, rather than a broad benchmark that does not necessarily reflect capabilities on specific, key applications."

IDC's Peter Rutten said, "MLPerf is definitely going mainstream, and it will become an important benchmark. While not every company is putting out MLPerf results, my sense is that users are starting to want to see the MLPerf scores on the applications that are more important to them, and it may be gaining traction as a key factor in purchasing and grading new technologies." As a measure of performance – critical in both HPC and AI – MLPerf has become the go-to benchmark.
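Since every MLPerf submission lists its full system configuration, one rough way to do that per-submission digging is to normalize time-to-train by accelerator count. The sketch below is an unofficial simplification (training rarely scales linearly, so accelerator-minutes is only a crude efficiency proxy), and every system name and number in it is hypothetical.

```python
# Rough, unofficial normalization of MLPerf-style results by accelerator
# count. All entries are hypothetical; real submissions list exact configs.
submissions = [
    {"system": "vendor_x_8xACCEL",   "accelerators": 8,   "minutes": 32.0},
    {"system": "vendor_y_64xACCEL",  "accelerators": 64,  "minutes": 5.5},
    {"system": "vendor_z_256xACCEL", "accelerators": 256, "minutes": 2.1},
]

for s in submissions:
    # Accelerator-minutes: lower is better; a crude per-chip efficiency proxy.
    s["accel_minutes"] = s["accelerators"] * s["minutes"]

# Rank by per-chip efficiency rather than raw wall-clock time.
for s in sorted(submissions, key=lambda s: s["accel_minutes"]):
    print(f'{s["system"]:>18}: {s["minutes"]:6.1f} min, '
          f'{s["accel_minutes"]:7.1f} accelerator-minutes')
```

On these made-up numbers, the fastest wall-clock system ranks last per accelerator-minute, which is exactly why no single MLPerf "winner" can be declared without looking at configurations.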