Effective Machine Learning Based Format Selection and Performance Modeling for SpMV on GPUs

EasyChair Preprint 388 • 11 pages • Date: August 1, 2018

Abstract

Sparse Matrix-Vector multiplication (SpMV) is a key kernel for many applications in computational science and data analytics. Several efforts have addressed the optimization of SpMV on GPUs, and a number of compact sparse-matrix representations have been considered for it. It has been observed that the sparsity pattern of the non-zero elements in a sparse matrix has a significant impact on achieved SpMV performance, and that no single sparse-matrix format is consistently the best across the range of sparse matrices encountered in practice. In this paper, we perform a comprehensive study that explores the use of machine learning to answer two questions: 1) Given an unseen sparse matrix, can we effectively predict the best format for SpMV on GPUs? 2) Can the SpMV execution time for that matrix be predicted for different matrix formats? By identifying a small set of sparse-matrix features to use in training the ML models, we demonstrate that the best format can be predicted with approximately 88% accuracy when selecting among six well-known sparse-matrix formats on two GPU architectures (NVIDIA Pascal and Kepler), and that execution time can be predicted with approximately 10% relative mean error (RME). The achieved accuracy and/or computational efficiency are better than those of previously reported approaches in the literature.

Keyphrases: Decision Tree, GPU, Multilayer Perceptron (MLP), Sparse Matrix-Vector multiplication (SpMV), sparse matrix format selection, Support Vector Machine (SVM), XGBoost
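The abstract describes training ML models on a small set of sparse-matrix features to select the best of six formats for an unseen matrix. A minimal sketch of that idea, assuming a SciPy/XGBoost pipeline with an illustrative feature set and format list (none of which are specified in the abstract), might look like:

```python
# Hypothetical sketch only: the paper does not publish code here; the feature
# set, format list, and XGBoost hyperparameters below are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
from xgboost import XGBClassifier

FORMATS = ["COO", "CSR", "ELL", "HYB", "CSR5", "SELL-P"]  # assumed label set

def matrix_features(A: sp.csr_matrix) -> np.ndarray:
    """Compute a small set of sparsity-pattern features for one matrix."""
    nnz_per_row = np.diff(A.indptr)
    n_rows, n_cols = A.shape
    return np.array([
        n_rows,
        n_cols,
        A.nnz,
        A.nnz / (n_rows * n_cols),   # density
        nnz_per_row.mean(),          # average non-zeros per row
        nnz_per_row.max(),           # longest row
        nnz_per_row.std(),           # row-length variability
    ])

def train_format_selector(matrices, best_format_labels):
    """Fit a classifier that predicts the best SpMV format for unseen matrices."""
    X = np.vstack([matrix_features(A) for A in matrices])
    y = np.array([FORMATS.index(f) for f in best_format_labels])
    clf = XGBClassifier(n_estimators=200, max_depth=6)
    return clf.fit(X, y)
```

Execution-time prediction would follow the same pattern with a regressor (e.g. an MLP or gradient-boosted trees) trained per format on the same features.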