

Comparison of Benchmarks for Machine Learning Cloud Infrastructures

Authors:
Manav Madan
Christoph Reich

Keywords: Machine Learning, Machine Learning Benchmark, MLPerf, AIBench, Deep Learning, Survey

Abstract:
Training neural networks often requires high computational power and large memory on Graphics Processing Unit (GPU) hardware. Many cloud providers, such as Amazon, Azure, Google, and Siemens, provide such infrastructure. However, when choosing between a cloud infrastructure and an on-premises system for a neural network application, how can these systems be compared with one another? This paper investigates seven prominent Machine Learning benchmarks: MLPerf, DAWNBench, DeepBench, DLBS, TBD, AIBench, and ADABench. The recent popularity and widespread use of Deep Learning in various applications have created a need for benchmarking in this field. This paper shows that different application domains need slightly different resources and argues that no standard benchmark suite is available that addresses all of these application needs. We compare these benchmarks and summarize the benchmark-related datasets, domains, and metrics. Finally, a concept of an ideal benchmark is sketched.
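As an illustration of the kind of kernel-level measurement that suites such as DeepBench perform when comparing GPU hardware, the following is a minimal sketch, not taken from the paper: it times a dense matrix multiplication (GEMM) with PyTorch and reports approximate throughput. The matrix size, warm-up count, and repetition count are illustrative assumptions.

```python
import time
import torch

# Minimal GEMM micro-benchmark sketch (illustrative only, not the paper's
# methodology). Matrix size and repetition counts are arbitrary assumptions.
def time_gemm(n: int = 4096, repeats: int = 50) -> float:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)

    # Warm-up to exclude one-time CUDA initialization from the measurement.
    for _ in range(5):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU work before timing

    start = time.perf_counter()
    for _ in range(repeats):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = (time.perf_counter() - start) / repeats

    # A dense n x n GEMM performs roughly 2 * n^3 floating-point operations.
    tflops = 2 * n**3 / elapsed / 1e12
    print(f"{device}: {elapsed * 1e3:.2f} ms per GEMM, ~{tflops:.1f} TFLOP/s")
    return elapsed

if __name__ == "__main__":
    time_gemm()
```

Such a micro-benchmark captures raw kernel throughput only; end-to-end suites like MLPerf or DAWNBench instead measure metrics such as time-to-accuracy over full training runs, which is one of the differences the paper surveys.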

Pages: 41 to 47

Copyright: Copyright (c) IARIA, 2021

Publication date: April 18, 2021

Published in: CLOUD COMPUTING 2021, The Twelfth International Conference on Cloud Computing, GRIDs, and Virtualization

ISSN: 2308-4294

ISBN: 978-1-61208-845-7

Location: Porto, Portugal

Dates: from April 18, 2021 to April 22, 2021