AtheMathmo / rulinalg

A linear algebra library written in Rust
https://crates.io/crates/rulinalg
MIT License

Adapt matrix factorizations from nalgebra? #191

Open · regexident opened this issue 7 years ago

regexident commented 7 years ago

With the latest release (v0.13), nalgebra now supports Rust-native implementations of the following matrix factorizations (a rough usage sketch follows the list):

In particular, this release includes pure-rust implementations of the following factorizations for real matrices ("general matrices" designates real-valued matrices that may be rectangular):

  • Cholesky decomposition of symmetric positive-definite matrices (+ inverse, square linear system resolution).
  • Hessenberg decomposition of square matrices.
  • LU decomposition of general matrices with partial pivoting (+ inversion, determinant, square linear system resolution).
  • LU decomposition of general matrices with full pivoting (+ inversion, determinant, square linear system resolution).
  • QR decomposition of general matrices (+ inverse, square linear system resolution).
  • Real Schur decomposition of general matrices (+ eigenvalues, complex eigenvalues).
  • Eigendecomposition of symmetric matrices.
  • Singular Value Decomposition (SVD) of general matrices (+ pseudo-inverse, linear system resolution, rank).
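
For a rough idea of what this looks like at the call site, here is an untested sketch of the kind of API these ship with; the method names and signatures below are my assumption from the release notes and may differ between nalgebra versions:

```rust
// Untested sketch (not copied from nalgebra's docs) of roughly how the new
// pure-Rust factorizations are used; names and signatures may differ by version.
use nalgebra::{DMatrix, DVector};

fn main() {
    // A small symmetric positive-definite matrix and a right-hand side.
    let a = DMatrix::<f64>::from_row_slice(3, 3, &[
        4.0, 1.0, 0.0,
        1.0, 3.0, 1.0,
        0.0, 1.0, 2.0,
    ]);
    let b = DVector::from_vec(vec![1.0, 2.0, 3.0]);

    // LU with partial pivoting: determinant and square linear system resolution.
    let lu = a.clone().lu();
    let det = lu.determinant();
    let x = lu.solve(&b); // None if the matrix is singular

    // Cholesky for symmetric positive-definite matrices.
    if let Some(chol) = a.clone().cholesky() {
        let y = chol.solve(&b);
        println!("Cholesky solve: {}", y);
    }

    // SVD (computing both U and V^T), with numerical rank.
    let svd = a.svd(true, true);
    let rank = svd.rank(1e-9);

    println!("det = {}, LU solve = {:?}, rank = {}", det, x, rank);
}
```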

This made me wonder: instead of reinventing the wheel in rulinalg, should the two projects join forces, or at least make use of each other's efforts, for these operations?

cc @sebcrozet

Related issues (to varying degrees), including but not limited to:

Andlon commented 7 years ago

That's certainly a good question!

I'm really happy to see more work on pure-Rust linear algebra algorithms. Rulinalg has been rather dormant for some time now, so it's good to see things happening in the ecosystem.

As for making use of each other's efforts, and assuming the continued co-existence of the two libraries, I'm sure we can learn a lot from each other. Beyond that, did you have anything particular in mind?

c410-f3r commented 7 years ago

Both projects could benefit from each other, like GCC and LLVM do, but different projects have different objectives and different leadership. As for myself, I think optimized linear algebra is so huge and complex that a better, faster, stronger, unified project would be an awesome standard reference for the Rust ecosystem.

regexident commented 7 years ago

> Beyond that, did you have anything particular in mind?

I basically stumbled upon the announcement and thought "wait a second, lots of these have open issues on rulinalg, maybe there is a chance for symbiosis here". :wink:
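
To make the overlap concrete: from memory, rulinalg's newer decomposition API already looks roughly like this for LU (sketch only; the exact types and signatures may not match the current release), so a lot of the work exists on both sides:

```rust
// Sketch from memory of rulinalg's decomposition API; exact names and
// signatures may not match the released crate.
use rulinalg::matrix::Matrix;
use rulinalg::matrix::decomposition::PartialPivLu;
use rulinalg::vector::Vector;

fn main() {
    let a = Matrix::new(2, 2, vec![4.0, 1.0,
                                   1.0, 3.0]);
    let b = Vector::new(vec![1.0, 2.0]);

    // LU with partial pivoting, computed once and reused.
    let lu = PartialPivLu::decompose(a).expect("matrix should be invertible");
    let x = lu.solve(b).expect("solve should succeed");
    let det = lu.det();
    let inv = lu.inverse().expect("inverse should exist");

    println!("x = {:?}, det = {}, inv = {:?}", x, det, inv);
}
```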

AtheMathmo commented 7 years ago

I have only had time to loosely follow what has been happening with nalgebra but I agree that it is good to see things moving in the ecosystem.

Unfortunately I just haven't found the time to pick up my own slack and get things moving with rulinalg again. I would be more than happy to see if there is a way we can work together towards some greater good. I am also a little unsure about exactly how this relationship would work - especially given the lack of activity on my end. But I'm very open to any ideas about how we could make a meaningful proposal.

regexident commented 7 years ago

FYI:

Implementing matrix decompositions (Cholesky, LQ, sym eigen) as differentiable operators: https://arxiv.org/abs/1710.08717

Abstract:

Development systems for deep learning, such as Theano, Torch, TensorFlow, or MXNet, are easy-to-use tools for creating complex neural network models. Since gradient computations are automatically baked in, and execution is mapped to high performance hardware, these models can be trained end-to-end on large amounts of data. However, it is currently not easy to implement many basic machine learning primitives in these systems (such as Gaussian processes, least squares estimation, principal components analysis, Kalman smoothing), mainly because they lack efficient support of linear algebra primitives as differentiable operators. We detail how a number of matrix decompositions (Cholesky, LQ, symmetric eigen) can be implemented as differentiable operators. We have implemented these primitives in MXNet, running on CPU and GPU in single and double precision. We sketch use cases of these new operators, learning Gaussian process and Bayesian linear regression models. Our implementation is based on BLAS/LAPACK APIs, for which highly tuned implementations are available on all major CPUs and GPUs.
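
To give a flavor of what "differentiable operator" means for one of these, here is my own sketch of the standard forward-mode (tangent) rule for the Cholesky factorization; it is the textbook identity, not a formula quoted from the paper:

```latex
% Sketch of the standard forward-mode rule for Cholesky (A = L L^T, with L
% lower triangular); textbook identity, not a quote from the paper.
\[
  A = L L^\top
  \quad\Longrightarrow\quad
  dA = dL\, L^\top + L\, dL^\top ,
\]
\[
  dL = L\, \Phi\!\left( L^{-1}\, dA\, L^{-\top} \right),
  \qquad
  \Phi(X)_{ij} =
  \begin{cases}
    X_{ij}              & i > j, \\
    \tfrac{1}{2} X_{ii} & i = j, \\
    0                   & i < j .
  \end{cases}
\]
% Reverse-mode (adjoint) rules of the same shape are what get wired into the
% autodiff frameworks mentioned in the abstract.
```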