yorkerlin opened 9 years ago
Hi @yorkerlin. Apologies for such a long delay. It surely looks interesting. I'll try to read the paper and get back to this issue. Maybe @karlnapf will share some thoughts on this.
I think if we put linear solves/factorisations in the linalg package, we will have a few very easy-to-use options:
- perform the solve on the GPU directly; there are some gains to be made: http://gamma.cs.unc.edu/LU-GPU/
- for sparse systems or conjugate gradient approaches, we can load the matrix to the GPU and then do the matrix-vector multiplications on the GPU. This gives a speedup too.
- there are also hybrid Cholesky factorization algorithms that use the GPU: http://www.netlib.org/utk/people/JackDongarra/PAPERS/tile_magma_journal.pdf
I think the way to do this is incremental once again.
1.) Put all the inverse-matrix stuff in the linalg interface, with basic implementations.
2.) Make use of it where we can in Shogun, in particular the GP part.
3.) Add the backends that speed things up.
I agree, the incremental approach is the way to go. Naive things should work before fancy things. I am thinking of starting work on adding factorizations/solvers.
Tagging related issues #2526 #2527.
@lambday @karlnapf This could be good for large-scale GPs. paper: http://arxiv.org/abs/1403.6015 code: https://github.com/sivaramambikasaran/HODLR