yorkerlin opened 8 years ago
On our rig, a GPU seems to be 20 times faster than a somewhat older CPU.
There are already quite a few CUDA-capable machine learning toolkits, mainly for neural networks and SVM, and we think that more are coming.
Here are a few. Neural network libraries are mostly in Python, and SVM packages in C/Matlab:
SVM packages:
GPUMLib - a C++ library with NN, SVM and matrix factorization
GTSVM - A GPU-Tailored Approach for Training Kernelized SVMs
cuSVM - A CUDA Implementation of Support Vector Classification and Regression in C/Matlab
GPUSVM - another CUDA SVM package
GPU-LIBSVM - GPU-accelerated LIBSVM for Matlab
@karlnapf Since one of Shogun's key features is Kernelized SVM, we have to improve our SVM algorithms. @lambday Your work is essential.
Might be worth having a look at these GPU-accelerated SVM solvers to see how we can adapt them to Shogun, and which operations we would need.
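As a rough illustration of which operations are involved (this is a hedged sketch, not Shogun code, and `rbf_kernel_block` is a hypothetical helper name): the dominant cost in kernelized SVM training is evaluating blocks of the kernel matrix, which for the RBF kernel reduces to a dense matrix product — exactly the kind of GEMM call a GPU accelerates well.

```python
import numpy as np

def rbf_kernel_block(X, Y, gamma=0.5):
    """RBF kernel block K[i, j] = exp(-gamma * ||x_i - y_j||^2).

    Pairwise squared distances are expanded as
    ||x||^2 + ||y||^2 - 2 x.y, so the heavy part is the
    matrix product X @ Y.T -- the operation a GPU speeds up.
    """
    sq_x = np.sum(X * X, axis=1)[:, None]   # ||x_i||^2 as a column
    sq_y = np.sum(Y * Y, axis=1)[None, :]   # ||y_j||^2 as a row
    d2 = sq_x + sq_y - 2.0 * (X @ Y.T)      # squared Euclidean distances
    return np.exp(-gamma * np.maximum(d2, 0.0))  # clamp tiny negatives
```

A GPU SVM solver would compute such blocks in batches on the device instead of one kernel entry at a time on the CPU.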
@karlnapf for some results, see Table 1 (page 6) of http://arxiv.org/pdf/1404.1066.pdf
Uh, man these speedups almost hurt -- 200x ?? We should definitely look more into this thing.
@karlnapf @yorkerlin hence the suggested GSoC project: a KKT framework, where one could address most of these issues
@karlnapf
The 200x in red likely means a 200x speedup in the training phase but poor accuracy in the test phase.
Hi team! Sorry to necro this issue, but is there now GPU support for SVM in Shogun?
@jucor our linalg finally supports it (just merged), but none of the SVMs' linear algebra has been ported to the new linalg framework yet... so not yet.
Thanks for the very fast answer @vigsterkr ! Impressive :) Good luck with the port.
For the record: it is not as simple as putting linalg calls in the SVM solver. The GPU SVM implementations are quite different from the CPU ones, and partly achieve their speedups through accuracy tradeoffs. This is a more serious task. Contributions are welcome!
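To sketch why swapping in linalg calls is not enough (a simplified, hypothetical example — this is dual coordinate ascent without a bias term, not Shogun's actual solver): the outer loop updates one dual coefficient at a time and each step depends on the previous one, so it cannot be parallelized wholesale. Only the dense kernel-row computation inside each step is GPU-friendly, which is why GPU solvers restructure the algorithm rather than just its linear algebra.

```python
import numpy as np

def rbf_row(X, i, gamma=0.5):
    """One row of the RBF kernel matrix: the dense, GPU-friendly part."""
    d2 = np.sum((X - X[i]) ** 2, axis=1)
    return np.exp(-gamma * d2)

def dual_coordinate_ascent(X, y, C=1.0, gamma=0.5, iters=200):
    """Very simplified SVM dual coordinate ascent (no bias term).

    Minimizes 0.5 * a'Qa - 1'a with Q_ij = y_i y_j K_ij and 0 <= a <= C.
    Each sweep updates a single alpha_i; the updates are inherently
    sequential, so only the kernel row per step can be offloaded.
    """
    n = X.shape[0]
    alpha = np.zeros(n)
    grad = -np.ones(n)                    # gradient of the dual objective
    for t in range(iters):
        i = t % n
        k_i = rbf_row(X, i, gamma)        # dense op: batchable on a GPU
        new_ai = np.clip(alpha[i] - grad[i] / k_i[i], 0.0, C)
        grad += (new_ai - alpha[i]) * y[i] * (y * k_i)  # rank-1 update
        alpha[i] = new_ai
    return alpha
```

The sequential dependency between iterations is the "more serious" part: GPU packages like GTSVM sidestep it by reorganizing the optimization (e.g. clustering or batching working sets), which is where the accuracy tradeoffs come in.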
@karlnapf http://fastml.com/running-things-on-a-gpu/