shogun-toolbox / shogun

Shōgun
http://shogun-toolbox.org
BSD 3-Clause "New" or "Revised" License

benchmark against GPU accelerated SVM libs #3160

Open yorkerlin opened 8 years ago

yorkerlin commented 8 years ago

@karlnapf http://fastml.com/running-things-on-a-gpu/

yorkerlin commented 8 years ago
On our rig, a GPU seems to be 20 times faster than a somewhat older CPU. 
There are already quite a few CUDA-capable machine learning toolkits, mainly for neural networks and SVM, and we think that more are coming. 
Here are a couple. Neural network libraries are mostly in Python and SVM packages in C/Matlab:

SVM packages:

    GPUMLib - a C++ library with NN, SVM and matrix factorization
    GTSVM - A GPU-Tailored Approach for Training Kernelized SVMs
    cuSVM - A CUDA Implementation of Support Vector Classification and Regression in C/Matlab
    GPUSVM - another CUDA SVM package
    GPU-LIBSVM - GPU-accelerated LIBSVM for Matlab

from http://fastml.com/running-things-on-a-gpu/

yorkerlin commented 8 years ago

@karlnapf Since kernelized SVMs are one of Shogun's key features, we have to improve our SVM algorithms. @lambday Your work is essential.

karlnapf commented 8 years ago

Might be worth having a look at these GPU-accelerated SVM solvers to see how we could adapt them to Shogun and which operations we would need.

yorkerlin commented 8 years ago

@karlnapf some results see Table 1 (page 6) of http://arxiv.org/pdf/1404.1066.pdf

karlnapf commented 8 years ago

Uh, man, these speedups almost hurt -- 200x?? We should definitely look into this more.

vigsterkr commented 8 years ago

@karlnapf @yorkerlin hence the suggested GSoC project (KKT framework), where one could address most of these issues.

yorkerlin commented 8 years ago

@karlnapf The 200x in red likely means: a 200x speed-up in the training phase, but poor accuracy in the test phase.
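One plausible source of such a gap (an assumption here, not something the linked table states) is that many GPU solvers run in single precision. A minimal NumPy sketch of that precision side, comparing an RBF kernel row computed in float64 and float32:

```python
import numpy as np

# Illustrative only: the same RBF kernel row computed in float64 vs float32.
# Single precision is common on GPUs; the rounding error it introduces is
# one assumed contributor to "fast training, degraded test accuracy".
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))
x = rng.standard_normal(50)
gamma = 0.02

def rbf_row(X, x):
    # k(x_i, x) = exp(-gamma * ||x_i - x||^2) for every row x_i of X
    d2 = ((X - x) ** 2).sum(axis=1)
    return np.exp(-gamma * d2)

row64 = rbf_row(X.astype(np.float64), x.astype(np.float64))
row32 = rbf_row(X.astype(np.float32), x.astype(np.float32))
max_err = np.abs(row64 - row32.astype(np.float64)).max()
print(max_err)  # small but nonzero rounding error
```

The per-entry error is tiny, but an SVM solver iterates over kernel values many times, so such errors (together with looser stopping tolerances) can accumulate into visible accuracy differences.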

jucor commented 7 years ago

Hi team! Sorry to necro this issue, but is there now GPU support for SVM in Shogun?

vigsterkr commented 7 years ago

@jucor Our linalg framework finally supports it (just merged), but none of the SVMs' linear-algebra code has been ported to the new linalg framework yet... so not yet.
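Shogun's actual linalg framework is C++; the following is only a toy Python analogue of the idea, with all names hypothetical. Operations go through a thin dispatch layer, so a solver written against it can run on whichever backend is registered:

```python
import numpy as np

# Toy analogue (hypothetical names, not Shogun's real API) of a
# backend-dispatched linalg layer: the solver calls linalg.dot(), and
# whichever backend is registered does the work. "Porting the SVMs"
# means routing their vector/matrix math through such a layer.
class CPUBackend:
    def dot(self, a, b):
        return float(np.dot(a, b))

class Linalg:
    def __init__(self, backend):
        self.backend = backend

    def dot(self, a, b):
        return self.backend.dot(a, b)

linalg = Linalg(CPUBackend())  # a GPU backend would slot in here instead
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(linalg.dot(a, b))  # → 32.0
```

The point of the design is that the solver code never mentions a device: swapping the backend changes where the arithmetic runs without touching the algorithm.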

jucor commented 7 years ago

Thanks for the very fast answer @vigsterkr ! Impressive :) Good luck with the port.

karlnapf commented 7 years ago

For the record: It is not as simple as putting linalg calls in the SVM solver. The GPU SVM implementations are quite different from the CPU ones, and partly achieve the speedups with accuracy tradeoffs. This is a more serious task. Contributions are welcome!
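To make the difference concrete (an illustrative sketch, not any particular solver's code): SMO-style CPU solvers request one or two kernel rows per iteration, while GPU-oriented solvers amortize memory traffic by evaluating large blocks of the kernel matrix in one batched operation. The values are the same; the access pattern is not:

```python
import numpy as np

# Illustrative sketch: row-at-a-time kernel evaluation (CPU/SMO style)
# vs batched block evaluation (the shape of work a GPU does well).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
gamma = 0.1

def kernel_row(X, i):
    """One RBF kernel row, as an SMO working-set step would request it."""
    d2 = ((X - X[i]) ** 2).sum(axis=1)
    return np.exp(-gamma * d2)

def kernel_block(X, idx):
    """A batch of kernel rows in one broadcasted operation."""
    d2 = ((X[idx][:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

idx = [3, 7, 42]
block = kernel_block(X, idx)
rows = np.stack([kernel_row(X, i) for i in idx])
print(np.allclose(block, rows))  # → True
```

Restructuring a working-set solver around such batched evaluation (and deciding where approximation is acceptable) is the real porting work, which is why it is more than swapping in linalg calls.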