shogun-toolbox / shogun

Shōgun
http://shogun-toolbox.org
BSD 3-Clause "New" or "Revised" License

Refactor EigenSolver #1930

Open vigsterkr opened 10 years ago

vigsterkr commented 10 years ago

As eigensolvers are used throughout Shogun (ICA, PCA, GaussianDistribution, JADiag etc.), we need to refactor shogun/mathematics/linalg/eigsolver/EigenSolver.h a bit so that the current EigenSolver can be used in all of these classes.

This refactoring would later allow the use of GPU-based eigensolvers, such as those available in MAGMA, PLASMA or ViennaCL.

karlnapf commented 10 years ago

First thing would be to generalise the current (sparse) solver interface which computes min/max eigenvalues to compute all eigenvalues. Then from that, we can have subclasses for iterative-sparse and dense solvers. Then we can put eigen3 implementations inside, or others like ViennaCL.

It would be great if one could change all of Shogun's dense Eigensolvers (which currently should all be eigen3 based) to a GPU based implementation in a one-liner with some sort of global flag.
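The hierarchy proposed here could be sketched roughly as below. This is a toy illustration only: the class names mirror the discussion but are not Shogun's actual API, and a closed-form 2x2 symmetric solve stands in for a real dense backend such as eigen3.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative stand-ins: the names sketch the proposed hierarchy,
// they are not Shogun's actual classes.
struct EigenSolver
{
    virtual ~EigenSolver() {}
    // generalised interface: compute all eigenvalues, not just min/max
    virtual std::vector<double> compute_all(
        const std::vector<std::vector<double>>& m) = 0;
};

// dense branch of the hierarchy; an iterative-sparse branch would sit
// beside it, and backends (eigen3, ViennaCL, MAGMA) would plug in below
struct DenseEigenSolver : EigenSolver
{
    std::vector<double> compute_all(
        const std::vector<std::vector<double>>& m) override
    {
        // closed form for a symmetric 2x2 matrix [[a, b], [b, c]],
        // returned in ascending order
        double a = m[0][0], b = m[0][1], c = m[1][1];
        double mean = (a + c) / 2.0;
        double d = std::sqrt((a - c) * (a - c) / 4.0 + b * b);
        return {mean - d, mean + d};
    }
};
```

A real implementation would of course dispatch to a library routine instead of the hard-coded 2x2 formula.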

karlnapf commented 10 years ago

@lambday should guide this one, as he wrote most of the code. Please let him (and us) know if you work on this.

karlnapf commented 10 years ago

Oh and one more thing. All those solvers should also have constructors that directly accept SGMatrix/SGSparseMatrix, which are internally wrapped into CLinearOperator instances.

vigsterkr commented 10 years ago

@lambday is there a particular reason you are storing the linear operator, i.e. m_linear_operator, in EigenSolver? personally i think it would be better to have a simple constructor for the eigen solver and change the compute function to EigenSolver::compute(CLinearOperator<float64_t>* linear_operator). this way the eigen solver is reusable if, for example, you want to solve eigenproblems for 2 different matrices. what's your opinion?
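The stateless design proposed here can be sketched with a toy power iteration: the operator is a parameter of compute rather than a member, so one solver instance can be reused on different matrices. All names are hypothetical, and a diagonal operator stands in for Shogun's CLinearOperator.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// hypothetical stand-in for CLinearOperator<float64_t>
struct LinearOperator
{
    virtual ~LinearOperator() {}
    virtual std::vector<double> apply(const std::vector<double>& x) const = 0;
    virtual std::size_t dim() const = 0;
};

struct DiagonalOperator : LinearOperator
{
    std::vector<double> diag;
    explicit DiagonalOperator(std::vector<double> d) : diag(std::move(d)) {}
    std::vector<double> apply(const std::vector<double>& x) const override
    {
        std::vector<double> y(x.size());
        for (std::size_t i = 0; i < x.size(); ++i)
            y[i] = diag[i] * x[i];
        return y;
    }
    std::size_t dim() const override { return diag.size(); }
};

struct PowerIterationEigenSolver
{
    // largest-magnitude eigenvalue via power iteration; the operator is
    // an argument, not stored state, so the solver is reusable
    double compute_max(const LinearOperator& op, int iters = 200) const
    {
        std::vector<double> v(op.dim(), 1.0);
        double lambda = 0.0;
        for (int k = 0; k < iters; ++k)
        {
            std::vector<double> w = op.apply(v);
            double norm = 0.0;
            for (double wi : w)
                norm += wi * wi;
            norm = std::sqrt(norm);
            for (double& wi : w)
                wi /= norm;
            lambda = norm; // |A v| with unit v converges to |lambda_max|
            v = w;
        }
        return lambda;
    }
};
```

The same solver object can then be handed two different operators in sequence, which is exactly the reuse argument above.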

lambday commented 10 years ago

@vigsterkr I completely agree. We just needed to compute it once, so it was okay, but I certainly like the idea of reusability. The only place in the log-determinant project where eigensolvers are used is CRationalApproximation::precompute(), which stores the linear operator in its base anyway, so we can pass it to the compute method of the eigen solver on the fly. Please also note that compute just computes extremal eigenvalues. I think adding another interface for compute_all() would be nice, keeping things inside compute unchanged - or maybe renaming it to compute_extremal().

vigsterkr commented 10 years ago

i've just run magma's eigen solver test. example output below. note that the GPU i'm using is kind of old and simple... but still very good results. the CPU version is a simple lapack eigensolver...

MAGMA 1.4.1 , compiled for CUDA capability >= 1.0
device 0: GeForce 320M, 950.0 MHz clock, 252.8 MB memory, capability 1.2
Usage: ./testing_dsyevd [options] [-h|--help]

    N   CPU Time (sec)   GPU Time (sec)
=======================================
 1088      0.75             2.19
 2112      5.52             2.16
 3136     18.08             2.26
 4160     43.85             0.03
 5184     80.08             0.05
 6208    136.36             0.05
 7232    221.71             0.10
 8256    333.74             0.08
 9280    421.67             0.10

karlnapf commented 10 years ago

This looks almost too good to be true ;)

We should aim to refactor all matrix factorisations (maybe even have a base class for that) and linear solvers in a way such that we can easily change the backend

lambday commented 10 years ago

Whoa! :+1: Just out of curiosity, does it work for sparse matrices? I mean I really wanna see how it performs with the ozone data. @vigsterkr If possible, could you please check that one against the LanczosEigenSolver we have?

lambday commented 10 years ago

Ah, leave it! We only compute two of 'em anyway! Won't make sense!

vigsterkr commented 10 years ago

@lambday there are GPU versions of the Lanczos solver both in MAGMA and in ViennaCL... we should introduce those as well, once we have this going nicely....

vigsterkr commented 10 years ago

@lambday should we really differentiate between compute_all(...) and compute_extremal(...) ?

lambday commented 10 years ago

@vigsterkr I think we should, if we're not planning to remove the current implementation of the Lanczos eigensolver altogether. It relies on a Lapack routine (dstemr, I guess) for handling the Lanczos T-matrix, which is called twice, for the min and max eigenvalues. Back then I wondered about the same thing, tried computing all of 'em instead, and saw quite a significant performance difference. I guess there would be occasions in which one would only be interested in the extremal ones (the log-det case, for example). As long as the performance difference remains, I think we should keep both. @karlnapf what do you think?

vigsterkr commented 10 years ago

btw i've just seen that there are SGMatrix<T>::compute_eigenvectors(...) and SGMatrix<T>::compute_few_eigenvectors(...) if LAPACK is available... shouldn't we get rid of these once the unified EigenSolver is ready?

@sonney2k according to git you've added those functions, what do you think?

vigsterkr commented 10 years ago

@lambday but this is not true in the case of CDirectEigenSolver, which computes all the eigenvalues... right?

lambday commented 10 years ago

@vigsterkr yep, true that!

lambday commented 10 years ago

@vigsterkr iirc there was a discussion on removing these things from SGVector/SGMatrix completely and having an Eigen3 backend, maybe with an Eigen factory class. That would have been nice.

vigsterkr commented 10 years ago

@lambday does lanczos work with sparse matrices only, or not?

lambday commented 10 years ago

@vigsterkr DirectEigenSolver was mainly meant for testing the framework in the beginning. We can easily get rid of that. The Lanczos one works for all types of LinearOperators - dense/sparse/others.

lambday commented 10 years ago

lol I wrote that before you typed :D I swear!

vigsterkr commented 10 years ago

@lambday lol... well it's ok to keep DirectEigenSolver as we don't always want to use the Lanczos solver.... maybe the name of DirectEigenSolver should be changed to something more meaningful? btw: can you come around irc instead of discussing it here in the issue? :)))

vigsterkr commented 10 years ago

@lambday so here's my initial patch, but as you've mentioned on IRC maybe we should rework the base EigenSolver class a bit. should we then have something like:

virtual SGVector<float64_t> compute_all(CLinearOperator<float64_t>* linear_operator) = 0;
/**
 * @param il low index of requested eigenpairs (1<=il<=n)
 * @param iu high index of requested eigenpairs (1<=il<=iu<=n)
 */
virtual SGVector<float64_t> compute_few(CLinearOperator<float64_t>* linear_operator, int il, int iu) = 0;

and maybe the corresponding functions for SGMatrix<float64_t> and SGSparseMatrix<float64_t>? and just do an SG_NOTIMPLEMENTED or invalid-operation error for eigen solvers that do not support sparse or dense matrices?

should we remove the storage of the eigenvalues/eigenvectors? or do we want to have them stored in the class for reuse (although i don't really see the point)?

Maybe we should return something other than an SGVector<float64_t>, as we want to return both eigenvectors and eigenvalues... any ideas what the best data structure for that would be? or we could just follow what eigen (the library) does: store the eigenvalues/eigenvectors and return them via a function call, with the object always storing the last eigen solution's eigenvectors/eigenvalues...
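The compute_few(il, iu) semantics and the SG_NOTIMPLEMENTED escape hatch discussed here could look like the toy below, where a diagonal matrix stands in for a real operator and the indices are 1-based into the ascending-sorted spectrum. Class and method names are illustrative, not Shogun's actual API.

```cpp
#include <algorithm>
#include <cassert>
#include <stdexcept>
#include <vector>

// toy dense-only solver; eigenvalues of a diagonal matrix are just its
// diagonal entries, which keeps the index semantics easy to verify
struct ToyDenseEigenSolver
{
    std::vector<double> compute_all(const std::vector<double>& diag) const
    {
        std::vector<double> ev(diag);
        std::sort(ev.begin(), ev.end()); // ascending spectrum
        return ev;
    }

    // eigenvalues il..iu of the sorted spectrum, with 1 <= il <= iu <= n
    std::vector<double> compute_few(
        const std::vector<double>& diag, int il, int iu) const
    {
        std::vector<double> ev = compute_all(diag);
        return std::vector<double>(ev.begin() + (il - 1), ev.begin() + iu);
    }

    // stand-in for the SG_NOTIMPLEMENTED idea: this solver is dense-only,
    // so the sparse entry point just refuses
    std::vector<double> compute_few_sparse(int, int) const
    {
        throw std::logic_error("compute_few: not implemented for sparse matrices");
    }
};
```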

vigsterkr commented 10 years ago

and the other thing i've thought about is how we should deal with the different backends: a) we create a separate class for each backend, e.g. for DirectEigenSolver there would be DirectLapackEigenSolver, DirectEigenEigenSolver (ok i know this name is horrible..), DirectMagmaEigenSolver etc.

b) we have only one DirectEigenSolver, and in the header we create an enum based on the available backends so that the user can choose which backend is used. here a shogun global parameter (think sg->parallel->num_threads()) could set globally which backend's eigensolver is used. in case a) the same functionality can only be achieved via an eigen solver factory class, as far as i can see now.

thoughts?
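Option b) could be sketched as below. The enum values, the global setter, and the dispatch are all hypothetical placeholders in the spirit of sg->parallel->num_threads(); a real implementation would call into the named library in each case rather than return a label.

```cpp
#include <cassert>
#include <string>

// hypothetical backend enum; entries would be compiled in based on
// which libraries are available
enum class EigenSolverBackend { LAPACK, EIGEN3, MAGMA };

// global default, analogous to a shogun-wide setting
static EigenSolverBackend g_eigensolver_backend = EigenSolverBackend::EIGEN3;

void set_global_eigensolver_backend(EigenSolverBackend b)
{
    g_eigensolver_backend = b;
}

struct DirectEigenSolver
{
    // in a real implementation each case would invoke that backend's
    // eigensolver; here we just report which one would run
    std::string backend_name() const
    {
        switch (g_eigensolver_backend)
        {
        case EigenSolverBackend::LAPACK: return "lapack";
        case EigenSolverBackend::EIGEN3: return "eigen3";
        case EigenSolverBackend::MAGMA: return "magma";
        }
        return "unknown";
    }
};
```

With this shape, switching every dense eigensolve in Shogun to a GPU backend really would be the one-liner set_global_eigensolver_backend(EigenSolverBackend::MAGMA).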

vigsterkr commented 10 years ago

and btw if we manage to do this with SVD as well, then for example we can completely remove the eigen dependency for PCA, as it will be solved with whatever backend is available...

karlnapf commented 10 years ago

I really like this! Agreed with all of the points so far

lambday commented 10 years ago

@vigsterkr @karlnapf I think storing the last call's solution is a good idea. There may be cases when we need to use the last solved result more than once. So keeping the compute methods void and adding getters for the eigenvalues and eigenvectors from the last compute call would do - then we don't need to think of fancy data structures for returning both at once :P I agree with the enum (which we should set to some default value - maybe the best candidate), but hiding the backend behind the API is necessary. As a user I would absolutely love the global backend setters (makes it really flexible), but it also shouldn't bother me if all I care about is computing eigenvalues and I don't know delta about magma/eigen3/lapack :)

Oh and as I mentioned on IRC, these eigensolvers work for SPD matrices (as you can see, all the eigenvalues we deal with here are real). So we might wanna rethink the storage as well - make it generic.
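The void-compute-plus-getters design proposed here could be sketched like this. The class is a toy: it "solves" a diagonal matrix (whose eigenvectors are the standard basis vectors) and caches the result, and the names are illustrative rather than Shogun's actual API.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct CachingEigenSolver
{
    // compute() returns void and caches the last solution, so both
    // eigenvalues and eigenvectors can be fetched without inventing a
    // combined return type
    void compute(const std::vector<double>& diag)
    {
        m_eigenvalues = diag;
        // eigenvectors of a diagonal matrix are the standard basis
        m_eigenvectors.assign(
            diag.size(), std::vector<double>(diag.size(), 0.0));
        for (std::size_t i = 0; i < diag.size(); ++i)
            m_eigenvectors[i][i] = 1.0;
    }

    // getters expose the last compute() call's solution
    const std::vector<double>& get_eigenvalues() const
    {
        return m_eigenvalues;
    }
    const std::vector<std::vector<double>>& get_eigenvectors() const
    {
        return m_eigenvectors;
    }

private:
    std::vector<double> m_eigenvalues;
    std::vector<std::vector<double>> m_eigenvectors;
};
```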

lambday commented 10 years ago

compute_few() sounds really good :+1: compute_extremal would then just be a wrapper for ease of use
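The wrapper idea is simple enough to show directly: compute_extremal() just asks compute_few() for the first and last entries of the sorted spectrum. compute_few is again a toy over a diagonal matrix with 1-based indices, as in the rest of this thread's sketches.

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// toy compute_few: eigenvalues il..iu (1-based) of the ascending
// spectrum of a diagonal matrix
std::vector<double> compute_few(std::vector<double> diag, int il, int iu)
{
    std::sort(diag.begin(), diag.end());
    return std::vector<double>(diag.begin() + (il - 1), diag.begin() + iu);
}

// compute_extremal as a thin convenience wrapper: {min, max} via two
// single-index compute_few calls
std::pair<double, double> compute_extremal(const std::vector<double>& diag)
{
    int n = static_cast<int>(diag.size());
    return {compute_few(diag, 1, 1)[0], compute_few(diag, n, n)[0]};
}
```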

lambday commented 10 years ago

Btw, we should really keep this API ready for python modular to try new things out using directors, once the backend is solid :) :)

lambday commented 10 years ago

@vigsterkr @karlnapf @sonney2k @lisitsyn on another note, how do we feel about CRS for sparse matrices instead of LIL? it leads to faster matrix-vector multiplication [https://github.com/lambday/shogun/blob/develop/benchmarks/sparse_test.cpp], is suitable for iterative linear/eigen solvers, and gives easy compatibility with eigen3.
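A minimal CRS (compressed row storage) sketch shows why the format suits iterative solvers: a matrix-vector product streams contiguously through the value array one row at a time. The field names below are the conventional ones for CRS, not Shogun's.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct CRSMatrix
{
    std::vector<double> values;  // non-zero entries, stored row by row
    std::vector<int> col_index;  // column of each entry in `values`
    std::vector<int> row_ptr;    // row i spans [row_ptr[i], row_ptr[i+1])
};

// y = A * x; each row's non-zeros are contiguous, so access is sequential
std::vector<double> matvec(const CRSMatrix& a, const std::vector<double>& x)
{
    std::vector<double> y(a.row_ptr.size() - 1, 0.0);
    for (std::size_t i = 0; i + 1 < a.row_ptr.size(); ++i)
        for (int k = a.row_ptr[i]; k < a.row_ptr[i + 1]; ++k)
            y[i] += a.values[k] * x[a.col_index[k]];
    return y;
}
```

For example, the matrix [[1,0,2],[0,3,0],[4,0,5]] is stored as values {1,2,3,4,5}, col_index {0,2,1,0,2}, row_ptr {0,2,3,5}.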

karlnapf commented 10 years ago