rust-ndarray / ndarray-linalg

Linear algebra package for rust-ndarray using LAPACK binding

Add `expm` (exponential matrix) implementation #156

Open elsuizo opened 5 years ago

elsuizo commented 5 years ago

https://en.wikipedia.org/wiki/Matrix_exponential
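For reference, the linked article defines the matrix exponential by the everywhere-convergent power series:

```latex
e^{A} = \sum_{k=0}^{\infty} \frac{1}{k!} A^{k}
      = I + A + \frac{A^{2}}{2!} + \frac{A^{3}}{3!} + \cdots
```

Summing this series directly is numerically poor for general matrices, which is why the discussion below turns to Padé-based methods.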

termoshtt commented 5 years ago

SciPy has three functions: `expm`, `expm2`, and `expm3`, and `expm2` and `expm3` are deprecated. We should use the Padé approximation, which is what `expm` uses.
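To sketch what the Padé route looks like end to end, here is a minimal, dependency-free illustration on plain 2x2 `f64` arrays: a fixed diagonal [3/3] Padé approximant combined with scaling and squaring. A production implementation (SciPy's `expm` follows Al-Mohy and Higham) chooses the Padé degree and the scaling factor from norm bounds; the fixed degree 3, the crude max-entry norm, and all names below are illustrative only.

```rust
type M2 = [[f64; 2]; 2];
const I2: M2 = [[1.0, 0.0], [0.0, 1.0]];

fn matmul(a: &M2, b: &M2) -> M2 {
    let mut c = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    c
}

/// exp(A) via a diagonal [3/3] Padé approximant with scaling and squaring.
fn expm_2x2(a: &M2) -> M2 {
    // Scale A by 2^-s so the scaled matrix is small (crude max-entry norm).
    let norm = a.iter().flatten().fold(0.0f64, |m, x| m.max(x.abs()));
    let s = if norm > 0.0 { (norm.log2().ceil() as i32 + 3).max(0) } else { 0 };
    let t = 0.5f64.powi(s);
    let mut az = *a;
    for row in az.iter_mut() {
        for x in row.iter_mut() {
            *x *= t;
        }
    }
    // [3/3] Padé of e^x: p(x) = 1 + x/2 + x^2/10 + x^3/120, q(x) = p(-x).
    let a2 = matmul(&az, &az);
    let a3 = matmul(&a2, &az);
    let (mut p, mut q) = (I2, I2);
    for i in 0..2 {
        for j in 0..2 {
            p[i][j] += az[i][j] / 2.0 + a2[i][j] / 10.0 + a3[i][j] / 120.0;
            q[i][j] += -az[i][j] / 2.0 + a2[i][j] / 10.0 - a3[i][j] / 120.0;
        }
    }
    // r = q^{-1} p via the closed-form 2x2 inverse (fine at this size;
    // larger matrices would use a linear solve instead).
    let det = q[0][0] * q[1][1] - q[0][1] * q[1][0];
    let qinv = [[q[1][1] / det, -q[0][1] / det], [-q[1][0] / det, q[0][0] / det]];
    let mut r = matmul(&qinv, &p);
    // Undo the scaling: exp(A) = (exp(A / 2^s))^(2^s).
    for _ in 0..s {
        r = matmul(&r, &r);
    }
    r
}

fn main() {
    // Sanity check on a diagonal matrix: exp(diag(1, -1)) = diag(e, 1/e).
    let e = expm_2x2(&[[1.0, 0.0], [0.0, -1.0]]);
    assert!((e[0][0] - 1f64.exp()).abs() < 1e-9);
    assert!((e[1][1] - (-1f64).exp()).abs() < 1e-9);
    println!("exp(diag(1,-1)) diag = ({:.9}, {:.9})", e[0][0], e[1][1]);
}
```

The two knobs a real implementation tunes — which Padé degree to use and how much to scale — are exactly what the later comments in this thread wrestle with.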


elsuizo commented 5 years ago

The Julia language implementation is the following: https://github.com/JuliaLang/julia/blob/master/stdlib/LinearAlgebra/src/dense.jl#L530-L611

SuperFluffy commented 4 years ago

You can try https://github.com/SuperFluffy/rust-expm.

I have to be honest: I implemented it more for fun and never used it in production. So YMMV.

bluss commented 2 years ago

@ethanhs see also this issue

matthagan15 commented 2 years ago

Assuming this issue is still open I'd love to take a stab at it.

naren-nallapareddy commented 2 years ago

@matthagan15 If you are working on the code, I will write tests, debug, or help in any way I can (FYI, I'm new to Rust).

matthagan15 commented 2 years ago

@naren-nallapareddy I have yet to get around to this. I'm still struggling to understand how the degree of the Padé approximation is chosen and how the denominator is implemented (I believe you just compute the inverse of the polynomial of the matrix, but a full inverse seems like a lot, so I'm not sure).
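On the denominator question: implementations typically avoid the full inverse. Writing the approximant as r(A) = q(A)^{-1} p(A), one solves the single linear system q(A) X = p(A), sharing one factorization across all n right-hand sides, instead of forming q(A)^{-1} explicitly. A self-contained sketch of that idea using plain Gaussian elimination with partial pivoting (the routine and names are illustrative, not this crate's API):

```rust
/// Solve Q * X = P for X by Gaussian elimination with partial pivoting.
/// In an expm, Q = q(A) and P = p(A) are the Padé denominator/numerator
/// polynomials already evaluated at the matrix A.
fn solve_matrix(mut q: Vec<Vec<f64>>, mut p: Vec<Vec<f64>>) -> Vec<Vec<f64>> {
    let n = q.len();
    for k in 0..n {
        // Partial pivoting: bring the largest |entry| in column k to row k.
        let piv = (k..n)
            .max_by(|&i, &j| q[i][k].abs().partial_cmp(&q[j][k].abs()).unwrap())
            .unwrap();
        q.swap(k, piv);
        p.swap(k, piv);
        // Eliminate below the pivot, applying the same ops to every RHS column.
        for i in k + 1..n {
            let f = q[i][k] / q[k][k];
            for j in k..n {
                let qkj = q[k][j];
                q[i][j] -= f * qkj;
            }
            for j in 0..n {
                let pkj = p[k][j];
                p[i][j] -= f * pkj;
            }
        }
    }
    // Back-substitution, overwriting P with the solution X.
    for k in (0..n).rev() {
        for j in 0..n {
            let mut s = p[k][j];
            for i in k + 1..n {
                s -= q[k][i] * p[i][j];
            }
            p[k][j] = s / q[k][k];
        }
    }
    p
}

fn main() {
    // With P = I this returns Q^{-1}; here Q = [[0,2],[1,0]],
    // so X should be [[0,1],[0.5,0]] (pivoting is exercised).
    let q = vec![vec![0.0, 2.0], vec![1.0, 0.0]];
    let p = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    let x = solve_matrix(q, p);
    assert!((x[0][1] - 1.0).abs() < 1e-12 && (x[1][0] - 0.5).abs() < 1e-12);
    println!("X = {:?}", x);
}
```

In the real crate this step would be done with the LAPACK-backed solve that ndarray-linalg already wraps, which is both cheaper and more numerically stable than computing and multiplying by an explicit inverse.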

matthagan15 commented 1 year ago

Status update: I have a bare-bones implementation of the norm estimation needed, which seems to be working in preliminary testing (around 90% agreement with exact values), and I have some expm approximations that seem to be working on a single 3x3 test matrix. Some things still need to be done:
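For context, the norm estimation referred to here is usually a Hager/Higham-style 1-norm estimator: instead of scanning all n^2 entries, it performs a few products with A and A^T and returns a lower bound on ||A||_1 that is frequently exact. A bare-bones, illustrative sketch (simplified from Higham's algorithm; not the code on the fork):

```rust
/// Exact 1-norm: maximum absolute column sum; touches every entry.
fn one_norm_exact(a: &[Vec<f64>]) -> f64 {
    let n = a.len();
    (0..n)
        .map(|j| a.iter().map(|row| row[j].abs()).sum::<f64>())
        .fold(0.0, f64::max)
}

/// Hager-style lower-bound estimate of ||A||_1 using a few products
/// with A and A^T instead of scanning all of A.
fn one_norm_estimate(a: &[Vec<f64>]) -> f64 {
    let n = a.len();
    let matvec = |v: &[f64]| -> Vec<f64> {
        a.iter()
            .map(|row| row.iter().zip(v).map(|(x, y)| x * y).sum())
            .collect()
    };
    let matvec_t = |v: &[f64]| -> Vec<f64> {
        (0..n)
            .map(|j| a.iter().zip(v).map(|(row, y)| row[j] * y).sum())
            .collect()
    };
    let mut x = vec![1.0 / n as f64; n];
    let mut est = 0.0;
    for _ in 0..5 {
        // A handful of iterations almost always suffices in practice.
        let y = matvec(&x);
        est = y.iter().map(|v| v.abs()).sum();
        let xi: Vec<f64> = y.iter().map(|v| if *v >= 0.0 { 1.0 } else { -1.0 }).collect();
        let z = matvec_t(&xi);
        let (jmax, zmax) = z
            .iter()
            .enumerate()
            .map(|(j, v)| (j, v.abs()))
            .fold((0, 0.0), |best, cur| if cur.1 > best.1 { cur } else { best });
        // Stop when no single column promises a larger estimate.
        if zmax <= z.iter().zip(&x).map(|(zv, xv)| zv * xv).sum::<f64>() {
            break;
        }
        x = vec![0.0; n];
        x[jmax] = 1.0;
    }
    est
}

fn main() {
    // Diagonal example: the exact 1-norm is 7, and the estimator finds it.
    let a = vec![
        vec![3.0, 0.0, 0.0],
        vec![0.0, -7.0, 0.0],
        vec![0.0, 0.0, 2.0],
    ];
    let (exact, est) = (one_norm_exact(&a), one_norm_estimate(&a));
    assert!(est <= exact + 1e-12);
    println!("exact = {exact}, estimate = {est}");
}
```

Because the estimate is always a lower bound, "around 90% of the exact value" is the expected failure mode rather than a bug; the payoff is that only matrix-vector products are needed, which matters for the large matrices mentioned later in the thread.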

matthagan15 commented 1 year ago

After a busy few days, it seems that expm is working for 2x2 complex rotation matrices! That is, picking a random angle theta, we know that exp(i theta pauli_x) = cos(theta) I + i sin(theta) pauli_x, so I tested this over random values of theta. The higher-order approximations give results within a factor of 2 or so of machine precision (f64::EPSILON), but the lower orders can have much larger errors. This seems to indicate something is wonky with the norm checking, but even then the worst error I've seen was on the order of 10^-13, which is good but just not as accurate as it should be. Next steps:

After I have it in a presentable state I'll try to break it down into a few chunks (total changes are currently sitting at around 1100 lines of code) and send some pull requests!
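For anyone who wants to reproduce that style of test without complex arithmetic, the identity above has a real analogue: with J = [[0, -1], [1, 0]] (so J^2 = -I, playing the role of i·pauli_x), exp(theta·J) = cos(theta)·I + sin(theta)·J, the usual rotation matrix. The sketch below checks that closed form against a plain truncated Taylor series standing in for expm (illustrative only, not the implementation under discussion):

```rust
type M2 = [[f64; 2]; 2];

fn matmul(a: &M2, b: &M2) -> M2 {
    let mut c = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    c
}

/// Truncated Taylor series for exp(A); a stand-in for a Padé-based expm.
fn expm_taylor(a: &M2, terms: usize) -> M2 {
    let mut sum = [[1.0, 0.0], [0.0, 1.0]];
    let mut term = [[1.0, 0.0], [0.0, 1.0]]; // holds A^k / k! each step
    for k in 1..=terms {
        term = matmul(&term, a);
        for r in 0..2 {
            for c in 0..2 {
                term[r][c] /= k as f64;
                sum[r][c] += term[r][c];
            }
        }
    }
    sum
}

fn main() {
    // exp(theta*J) should equal [[cos t, -sin t], [sin t, cos t]].
    for &theta in &[0.1f64, 0.7, 1.3, 2.9] {
        let j = [[0.0, -theta], [theta, 0.0]];
        let e = expm_taylor(&j, 30);
        let exact = [[theta.cos(), -theta.sin()], [theta.sin(), theta.cos()]];
        for r in 0..2 {
            for c in 0..2 {
                assert!((e[r][c] - exact[r][c]).abs() < 1e-12);
            }
        }
    }
    println!("rotation identity holds for all tested angles");
}
```

Comparing against a closed form like this isolates the approximation error from any complex-arithmetic bookkeeping, which makes it a useful first unit test before moving to the complex pauli_x version.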

matthagan15 commented 1 year ago

So I've simplified the code a fair bit, but unfortunately I've found a bug which is preventing me from sending it out. I'm not sure what the bug is, but there seems to be an error that scales with the dimension of the input matrix: at around 200x200, each entry has an error on the order of 966 x f64::EPSILON (about 10^-12), which leads to a one-norm error of at least 2 x 10^-10. This may not seem significant, but the per-entry error is around 20 times worse than scipy's implementation, and scipy's per-entry error does not increase with the dimension. I used this in a Monte Carlo numerical test: below a critical dimension of about 50 (where the error is comparable to scipy's) the numerics worked beautifully, but above a dimension of around 150 my output became incredibly noisy, indicating the errors had accumulated to my working precision.

I'm currently trying to determine what could be causing this. It is not a bug in my Padé approximations, as it occurs at all approximation levels (so it doesn't seem to be a dumb code mistake). Further, because the earlier approximations do not use any scaling and squaring, it can't be an issue with that. I've tested the inversions used, and they work as expected. It may be an issue with one of the subroutines, but to unit test those I have to check them against the scipy implementations. Hopefully I'll find it soon!

matthagan15 commented 1 year ago

Did some tweaks. I believe the error mentioned previously is not an algorithmic implementation issue, as it does not appear with sparse matrices. There are still some TODOs on my list: smartly switching from the exact 1-norm calculation to norm estimation (which I also have implemented on my fork) for large matrices, which is what scipy does. The pull request is at https://github.com/rust-ndarray/ndarray-linalg/pull/352 for anyone curious.