INFOCOMP 2018, The Eighth International Conference on Advanced Communications and Computation
Authors:
Vojtech Nikl
Ondrej Vysocky
Lubomir Riha
Jan Zapletal
Keywords: Sparse matrices, Neural network, Support Vector Machine, Fast Decision Tree Learner, Weka, Energy efficiency, Prediction
Abstract:
Combinations of three hardware parameters (number of threads, core frequency and uncore frequency) were tested for four sparse matrix algorithms (matrix-matrix addition, matrix-matrix multiplication and matrix-vector multiplication in two formats) on a set of over 2,000 matrices, with the aim of identifying the best energy-to-solution setting for each combination of matrix and sparse matrix operation. On this data set, the possibility of predicting the optimal hardware settings from the properties of each matrix was analysed using neural networks, Support Vector Machines and Fast Decision Tree Learners. All three classes of algorithms have proven effective in many areas, including prediction and classification. In the neural networks, the input neurons represented properties of a given matrix, while the output neurons represented the optimal hardware parameters. The impact of network properties (number of hidden neuron layers, neurons per layer, learning coefficient and training cycles) on prediction accuracy was analysed, and the results showed that a network with 30 hidden neurons produced results close to the best achievable. The prediction accuracy of the neural networks ranged from 20% to 95%, averaging roughly 70%. Support Vector Machines were accurate in 60-65% of cases, and Fast Decision Tree Learners provided the least accurate predictions, at 50-55%.
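The prediction setup described in the abstract can be sketched as a standard multi-class classification problem: matrix properties serve as input features, and the class label encodes the best-performing hardware configuration. The sketch below is illustrative only, assuming scikit-learn in place of the Weka toolchain named in the keywords (with `DecisionTreeClassifier` standing in for Weka's Fast Decision Tree Learner, REPTree); the feature names and synthetic data are hypothetical, not taken from the paper.

```python
# Illustrative sketch: predicting an optimal hardware setting from matrix
# properties with the three model families compared in the paper.
# Features and labels here are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical features per sparse matrix:
# [rows, nonzeros, density, bandwidth] (normalized to [0, 1))
X = rng.random((200, 4))
# Hypothetical label: index of the best (threads, core freq, uncore freq)
# combination; here derived from one feature to make the task learnable.
y = (X[:, 0] * 3).astype(int)

models = {
    # One hidden layer with 30 neurons, as found near-optimal in the paper
    "neural network": MLPClassifier(hidden_layer_sizes=(30,),
                                    max_iter=2000, random_state=0),
    "SVM": SVC(),
    "decision tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X[:150], y[:150])          # train on 150 matrices
    acc = model.score(X[150:], y[150:])  # evaluate on the remaining 50
    print(f"{name}: test accuracy {acc:.2f}")
```

In this framing, "accuracy" is the fraction of held-out matrices for which the predicted hardware configuration matches the measured energy-optimal one, mirroring the per-model percentages reported in the abstract.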
Pages: 43 to 48
Copyright: Copyright (c) IARIA, 2018
Publication date: July 22, 2018
Published in: conference
ISSN: 2308-3484
ISBN: 978-1-61208-655-2
Location: Barcelona, Spain
Dates: from July 22, 2018 to July 26, 2018