Efficient parallel implementation of kernel methods

publication date

  • May 2016

start page

  • 175

end page

  • 186

volume

  • 191

international standard serial number (ISSN)

  • 0925-2312

electronic international standard serial number (EISSN)

  • 1872-8286

abstract

  • The availability of multi-core processors has motivated increasing research interest in parallelizing machine learning algorithms. Kernel methods such as Support Vector Machines (SVMs) and Gaussian Processes (GPs), despite their efficacy in classification and regression problems, have a very high computational cost and usually produce very large models. In this paper we present parallel algorithmic implementations of the Semiparametric SVM (Parallel Semiparametric SVM, PS-SVM) and of Gaussian Processes (Parallel full GP, P-GP, and Parallel Semiparametric GP, PS-GP). We have implemented the proposed methods using OpenMP and benchmarked them against other state-of-the-art methods, showing their good performance and advantages in both computation time and final model size. (C) 2016 Elsevier B.V. All rights reserved.

keywords

  • support vector machine; Gaussian process; OpenMP; parallel; kernel methods; matrix inversion; support vector classifiers; machine; MapReduce; compact; SVM