shogun-toolbox / shogun

Shōgun
http://shogun-toolbox.org
BSD 3-Clause "New" or "Revised" License

GPU implementation for approximation of inverse kernel matrix #2898

Open yorkerlin opened 9 years ago

yorkerlin commented 9 years ago

@lambday @karlnapf This could be good for large-scale GPs. Paper: http://arxiv.org/abs/1403.6015 Code: https://github.com/sivaramambikasaran/HODLR
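The premise behind HODLR-type solvers is that off-diagonal blocks of a smooth kernel matrix are numerically low-rank, so they can be compressed and the whole matrix factorized in roughly O(n log n) work instead of O(n^3). A minimal NumPy sketch of that premise (illustrative only, not Shogun or HODLR code):

```python
import numpy as np

# Sorted 1-D inputs and a squared-exponential (RBF) kernel matrix,
# a common GP covariance. All names here are illustrative.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, size=200))
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)

# Off-diagonal block coupling the first half of the points to the
# second half. HODLR exploits the fact that such blocks compress well.
B = K[:100, 100:]
s = np.linalg.svd(B, compute_uv=False)

# Numerical rank at a relative tolerance of 1e-10 is far below the
# full block size of 100.
rank = int(np.sum(s > 1e-10 * s[0]))
print(rank)
```

The same rapid singular-value decay holds recursively for the sub-blocks, which is what makes the hierarchical factorization in the paper cheap.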

lambday commented 8 years ago

Hi @yorkerlin. Apologies for such a long delay. It surely looks interesting. I'll try to read the paper and get back to this issue. Maybe @karlnapf will share some thoughts on this.

karlnapf commented 8 years ago

I think if we put linear solves/factorisations in the linalg package, we will have a few very easy-to-use options:

- Perform the solve on the GPU directly; there are some gains to be made: http://gamma.cs.unc.edu/LU-GPU/
- For sparse systems or conjugate-gradient approaches, we can load the matrix to the GPU and then do the matrix-vector multiplications on the GPU. This gives a speedup too.
- There are also hybrid Cholesky factorization algorithms that use the GPU: http://www.netlib.org/utk/people/JackDongarra/PAPERS/tile_magma_journal.pdf
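The conjugate-gradient option is attractive precisely because the solver only ever touches the matrix through matrix-vector products, so that one operation is all that needs to move to the GPU. A minimal sketch of that structure (plain NumPy, illustrative rather than Shogun code; the `matvec` callback is the piece a GPU backend would replace):

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Solve K x = b for symmetric positive-definite K, given only
    a function computing v -> K v (the operation to offload)."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Kp = matvec(p)
        alpha = rs / (p @ Kp)
        x += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small SPD system as a usage check.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
K = A @ A.T + 50.0 * np.eye(50)   # symmetric positive definite
b = rng.standard_normal(50)
x = conjugate_gradient(lambda v: K @ v, b)
print(np.linalg.norm(K @ x - b))
```

Swapping `lambda v: K @ v` for a GPU kernel (or a sparse matvec) changes nothing in the solver itself, which is the appeal of putting this behind a linalg interface.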

I think the way to do this is incremental once again.

1. Put all the inverse-matrix stuff in the linalg interface, with basic implementations.
2. Make use of it where we can in Shogun, in particular the GP part.
3. Add the backends that speed things up.
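The three steps above amount to a single solver entry point with pluggable backends: a basic CPU implementation lands first, callers switch to the interface, and faster backends slot in later without touching call sites. A hypothetical sketch of that shape (names like `LinalgBackend` and `set_backend` are illustrative, not Shogun's actual API):

```python
import numpy as np

class LinalgBackend:
    """Step 1: baseline CPU backend with a basic dense solve."""
    def solve(self, K, b):
        return np.linalg.solve(K, b)

class CholeskyBackend(LinalgBackend):
    """Step 3: a faster path for SPD matrices; a GPU backend
    would plug in through the same interface."""
    def solve(self, K, b):
        L = np.linalg.cholesky(K)
        y = np.linalg.solve(L, b)       # forward substitution
        return np.linalg.solve(L.T, y)  # back substitution

_backend = LinalgBackend()

def set_backend(backend):
    global _backend
    _backend = backend

def solve(K, b):
    """Step 2: the one entry point callers (e.g. the GP code) use."""
    return _backend.solve(K, b)

K = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_cpu = solve(K, b)
set_backend(CholeskyBackend())
x_chol = solve(K, b)
print(np.allclose(x_cpu, x_chol))
```

Because both backends satisfy the same contract, the results agree and the swap is invisible to callers, which is exactly what makes the incremental rollout safe.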

lambday commented 8 years ago

I agree, the incremental approach is the way to go. Naive things should work before fancy things. I am thinking of starting work on adding factorizations/solvers.
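For the GP part, the "naive thing that works first" is the exact Cholesky-based solve of the O(n^3) bottleneck, (K + sigma^2 I)^{-1} y, which the approximate and GPU-backed solvers would later replace. A small illustrative sketch of that baseline (plain NumPy, not Shogun code; the RBF kernel and data here are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 5.0, 40))
y = np.sin(x) + 0.1 * rng.standard_normal(40)

def rbf(a, b):
    """Squared-exponential kernel with unit lengthscale."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

sigma2 = 0.01
K = rbf(x, x) + sigma2 * np.eye(40)

# Exact baseline: Cholesky solve of (K + sigma^2 I) alpha = y.
# This is the cubic-cost step that approximations target.
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

# GP posterior mean at test points.
x_test = np.linspace(0.0, 5.0, 10)
mean = rbf(x_test, x) @ alpha
print(mean.shape)
```

Once this exact path sits behind the linalg interface, a HODLR or CG-based solve can replace the Cholesky step for large n without changing the GP code.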

Tagging related issues #2526 #2527.