JamesTev opened this issue 3 years ago
> I need to be able to find eigenvalues of an asymmetric square matrix (as offered by numpy), although it seems that `ulab.linalg.eig` only works on symmetric square matrices at the moment. Is there a reason for this? I assume it's for computational reasons.
That was the simplest case to implement. To be honest, you are the first who has asked a question about `eig`, so I wasn't even sure whether people use it at all, and there seemed to be no reason to invest too much time into something that nobody needs.

The snag is that each type of matrix has its own numerical difficulties, and if you look at serious numerical packages, different methods are used for the different types. In other words, there is no silver bullet.
> Also, is an implementation of SVD and/or QR factorisation in the pipeline at all?
No, but this doesn't mean that it can't be.
> I see you use Givens rotations for `linalg.eig`, so QR shouldn't be too bad.
Do you have a particular algorithm in mind, or would you like to take a stab at the implementation?
I see - that makes sense.
> In other words, there is no silver bullet.
I'm not too familiar with your particular `eig` implementation, but do you think handling asymmetric matrices would be difficult? Perhaps Schur decomposition (see here) could be an alternative? It works for arbitrary square matrices. I'm not too familiar with it, but I believe it can be computed using QR factorisation, which itself can be computed using Givens rotations.
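For reference, the QR iteration that underlies this idea can be sketched in a few lines of numpy. This is the unshifted variant, which converges slowly in general; it's only an illustration of the principle, not a production algorithm:

```python
import numpy as np

def eig_qr(A, iterations=500):
    """Unshifted QR iteration: repeatedly factor A = QR and form RQ.
    For many matrices the iterates approach an upper-triangular (Schur)
    form whose diagonal holds the eigenvalues."""
    A = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return np.diag(A)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
w = np.sort(eig_qr(A))  # eigenvalues of A, ascending
```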
> Do you have a particular algorithm in mind, or would you like to take a stab at the implementation?
At the moment, I'm trying to compute CCA in real time on an ESP32, which requires SVD or being able to find eigenvalues. Unfortunately, my C skills aren't great, so taking a stab at the implementation would take quite some time!
As an aside: I think having fast numpy-like linear algebra functions is really useful and convenient and far easier to use than some of the standard C/C++ implementations. Please keep up the good work.
> I see - that makes sense.
>
> I'm not too familiar with your particular `eig` implementation but do you think handling asymmetric matrices would be difficult? Perhaps Schur decomp (see here) could be an alternative? It works for arbitrary square matrices. I'm not too familiar with it but I believe it can be solved using QR factorisation which itself can be computed using Givens rotations.
I actually use Jacobi rotations, but that is beside the point. I think QR would be relatively straightforward. Give me a couple of days. But that might not take you to the finish line.
I forgot to mention yesterday that one of the reasons for sticking with symmetric matrices was that their eigenvalues are real, and I didn't want to deal with complex numbers. (I have nothing against them, but they add too much to the firmware.)
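For reference, the real-vs-complex distinction is easy to demonstrate with numpy (used here only as an illustration):

```python
import numpy as np

# Symmetric matrices have real eigenvalues...
sym = np.array([[2.0, 1.0], [1.0, 2.0]])
w_sym = np.linalg.eigvalsh(sym)   # real, returned in ascending order

# ...while a general square matrix can have complex ones, e.g. a
# 90-degree rotation matrix, whose eigenvalues are +i and -i.
rot = np.array([[0.0, -1.0], [1.0, 0.0]])
w_rot = np.linalg.eigvals(rot)
```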
> At the moment, I'm trying to compute CCA in real time on an ESP32 which requires SVD or being able to find eigenvalues.
In that case, would an implementation of https://github.com/v923z/micropython-ulab/issues/188 help you? The only thing missing there is the addition of the `keepdims` keyword to some of the numerical functions. I believe that is not terribly difficult. I have meant to fix that, but I am a bit tied down with another feature.
> Unfortunately, my C skills aren't great so taking a stab at the implementation would take quite some time!
That's fine, but I might ask you to help with the testing.
> As an aside: I think having fast numpy-like linear algebra functions is really useful and convenient and far easier to use than some of the standard C/C++ implementations. Please keep up the good work.
This project lives by the interest of the users.
@JamesTev Here is a new issue, you could comment on the specific questions there: https://github.com/v923z/micropython-ulab/issues/364
@JamesTev I have opened a draft under https://github.com/v923z/micropython-ulab/pull/366. With that, we should be able to insert a generic eigenvalue solver in `linalg`. Could you, please, spell out once more what exactly you need? Matrix manipulations in linear algebra are really a huge topic, and we shouldn't squander resources on an implementation that is of no interest to anybody.
@v923z Awesome, thank you! I agree that it's important to focus resources on the most useful functionality. To this end, a generic eigenvalue solver would be useful for countless algorithms and applications. My need (to perform CCA for EEG analysis) is just one. Although dimensionality reduction type problems are often fine because covariance matrices are symmetric, I've encountered several problems that produce asymmetric ones.
I'm going to chime in here saying that an SVD implementation would let me implement a generic pseudoinverse solver for elliptical fitting. I'm a PhD student at UW, and currently my "SVD implementation" is a rudimentary one based on power iteration; a more robust and efficient approach, such as one using the QR algorithm, would be helpful.
Seeing that QR decomposition was implemented in #427, I don't believe SVD would be too bad to implement now. Unfortunately, numerical algorithms just aren't my area, and I'm mainly interested in the applications. If it would be possible to implement a generic SVD, that would be great.
I can certainly try to help with the SVD algorithm if need be and time allows.
There are a couple of features in the pipeline, but I might be able to work on this in the next one or two weeks. When this issue came up, I looked into possible implementations of the SVD algorithm, and it's definitely doable with moderate effort. Once I have a reasonable prototype, I'd like to ask you to test it.
@v923z - I'd like to help you get SVD support added to ulab, along with everything necessary to implement a homography.
E.g. https://medium.com/analytics-vidhya/using-homography-for-pose-estimation-in-opencv-a7215f260fdd
While I can easily implement a method to do this in OpenMV thanks to AprilTag's matrix library in our code base I think it should be in ulab instead of image library directly.
Please let me know how I can help.
...
All the matrix math you need: https://github.com/openmv/apriltag/blob/master/common/matd.c#L980
And how to do a homography: https://github.com/openmv/apriltag/blob/master/common/homography.c
...
We use the homography_to_pose method to produce what we need given the camera matrix.
@kwagyeman SVD in `ulab` would be a great addition, I completely agree. (You have to look no further than this feature request.) It seems to me that most of the work has already been done in matd.c, thanks for the pointer! I can add the required interface, that would be absolutely no problem.

As for the homography routine, do you want to add that to `ulab`? If so, can we place it in the `utils` module, or something similar, given that it seems to have no corresponding `numpy`/`scipy` function?
Also, would you be interested in supporting blob searching and similar? There is some code for that e.g., here: https://github.com/teuler/make-thermal-cam/blob/main/blob_ulab2_c_code/user.c, and I've been flirting with the idea for a long time. Having some image processing functionality could be useful beyond openmv, and I would be happy to add those functions, but I just don't know if there is interest.
I wouldn't add anything to ulab that needs performance. Pretty much anything for image processing needs to go into SIMD land if you really want to push the speed.
I'd keep it more high level and focused on doing mathematics for arrays.
Note: the AprilTag folks released an AprilTag 3 library that may have an updated version of that library with faster/better code.
As for the homography... I could add something to our image library to handle this. However, it may be more efficient to have it in ulab, since it needs the SVD code base; otherwise there would just be duplicates of the same code all over. Given that, adding the methods in their homography C file to a generic `utils` module for everyone to use would, I think, save code space in the long run.

If you could make it possible to call these methods from C, that would be great. Then I could add a method to our image lib if we needed to for some reason, without having two copies of the same base library.
> I wouldn't add anything to ulab that needs performance. Pretty much anything for image processing needs to go into SIMD land if you really want to push the speed.
>
> I'd keep it more high level and focused on doing mathematics for arrays.
OK.
> Note: the AprilTag folks released an AprilTag 3 library that may have an updated version of that library with faster/better code.
As far as I see, the last commit to the SVD routine was two years ago: https://github.com/AprilRobotics/apriltag/blob/master/common/svd22.c
> As for the homography... I could add something to our image library to handle this. However, it may be more efficient to have it in ulab, since it needs the SVD code base; otherwise there would just be duplicates of the same code all over. Given that, adding the methods in their homography C file to a generic `utils` module for everyone to use would, I think, save code space in the long run. If you could make it possible to call these methods from C, that would be great. Then I could add a method to our image lib if we needed to for some reason, without having two copies of the same base library.
On the one hand, I guess it would be enough if we made the functions non-static; then you could call them from everywhere. But on the other hand, if you want to implement another interface, that would definitely lead to code duplication, because the argument parsing etc. would be in two places. I might be missing the point, though.
Argument parsing would be different with a different interface, as it would be specific to a different situation.

Avoiding having two copies of the base library would be great.
Reference documentation for `linalg.svd`: https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.svd.html
@v923z I have two users asking about this. What would it take for this to get done quickly?
I'll have it by the weekend, if that's not too late.
*The SVD, that is.
Look, I'm working on it: https://github.com/v923z/micropython-ulab/tree/svd!
@v923z - That's awesome. Can I buy you a beer?
@kwagyeman
> Can I buy you a beer?
I'm a teetotaller, so no. But thanks, anyway. However, what you could do is take a look at the code: I've adapted your implementation. At the end, there are three function calls to `matd_op`:

It's not entirely clear to me what they should do, so you can either tell me what is to be implemented there, or you can insert the code. It seems to me that `matd_op` has recursion in it, which I'd like to avoid, so if you want to do this via function calls, and not in place, then we should strip `matd_op` down to the bare minimum.

Also, there are commented-out calls to `matd_op`, e.g., here https://github.com/v923z/micropython-ulab/blob/5084bd4aa5ade7b8d3b2d621e81d4578ca8310e2/code/scipy/linalg/linalg.c#L687-L695 where the actual code seems to be in-line. Why is that?
`matd_op` performs matrix math using the passed string: https://github.com/openmv/apriltag/blob/master/common/matd.h#L355

So, you can implement what it does by using your dot() and transpose() code.
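For instance (using plain numpy as a stand-in, and a hypothetical expression string), a `matd_op` call that evaluates "M'*B" is just a transpose followed by a dot product:

```python
import numpy as np

# matd_op("M'*B", M, B) evaluates the expression string at runtime;
# the same result comes from composing transpose and dot directly.
M = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
B = np.array([[1.0], [0.0], [2.0]])

result = np.dot(M.T, B)  # equivalent of the "M'*B" expression
```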
...
As for the code being implemented in line, that was probably for performance reasons. The comment is just showing what was there previously.
I think the code is complete now, but it would need checking and testing.
@v923z Awesome!
Can you port the homography stuff too into a utils module?
https://github.com/AprilRobotics/apriltag/blob/master/common/homography.h
(Note that HOMOGRAPHY_COMPUTE_FLAG_INVERSE fails often. HOMOGRAPHY_COMPUTE_FLAG_SVD works more reliably).
```c
matd_t *H = homography_compute(correspondences, HOMOGRAPHY_COMPUTE_FLAG_INVERSE);
if (!H) { // try again...
    H = homography_compute(correspondences, HOMOGRAPHY_COMPUTE_FLAG_SVD);
}
```
Also, I'd love to provide some financial compensation for doing this port. It really helps all our users having access to this type of code. Having homography exposed in utils will allow folks to do awesome stuff.
> Can you port the homography stuff too into a utils module?
Yes, we could do that, but I actually wonder whether we should just create a new module, `image`, or something similar. My reasoning is that other functions (like the blob searching that I alluded to above) could also be added, and if people don't need these functions, then they can simply exclude the whole module. That's a bit more intuitive and transparent than adding a bunch of unrelated methods to `utils`.
> Also, I'd love to provide some financial compensation for doing this port. It really helps all our users having access to this type of code. Having homography exposed in utils will allow folks to do awesome stuff.
We can talk about this on a private channel.
@EmDash00 @JamesTev Emmy/James, do you think you could take a stab at writing a small test script? Results might be different from `numpy`, given that the SVD is not unique.
> @EmDash00 @JamesTev Emmy/James, do you think you could take a stab at writing a small test script? Results might be different from `numpy`, given that the SVD is not unique.
I can take a stab at it. How do I contribute?
> I can take a stab at it. How do I contribute?
Many thanks!
There are a couple of pointers here: https://github.com/v923z/micropython-ulab?tab=readme-ov-file#testing. You can take pretty much any of the test scripts as your boilerplate, e.g., https://github.com/v923z/micropython-ulab/blob/master/tests/2d/numpy/bitwise_or.py.
Then you would just insert a matrix that is sort of meaningful, and try to verify that the results are correct. Once you're satisfied with the results, you can move your test script and the expected file to https://github.com/v923z/micropython-ulab/tree/master/tests/2d/scipy.
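A minimal sketch of such a test, written against plain numpy for reference (the ulab version would swap in the ulab module), checks the reconstruction rather than comparing the factors elementwise, precisely because the SVD is not unique:

```python
import numpy as np

# Verify the decomposition by reconstruction rather than comparing
# U, s, Vh elementwise to numpy's output: any SVD that satisfies
# A = U @ S @ Vh is acceptable.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
U, s, Vh = np.linalg.svd(A, full_matrices=True)

# Rebuild the (M, N) singular-value matrix from the 1D array s
S = np.zeros(A.shape)
S[:len(s), :len(s)] = np.diag(s)

assert np.allclose(U @ S @ Vh, A)   # reconstruction holds
assert np.all(s[:-1] >= s[1:])      # singular values are descending
```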
Sorry for the lack of updates: I'm a PhD student and am currently working on other projects, so this isn't my priority right now. I might get to it next month when I'm more free. I think I can definitely write them, though.
@v923z I finally got around to running some preliminary tests. A couple of things I noticed. You output a full diagonal matrix, which is unnecessary. The SciPy implementation of the SVD outputs the singular values as a 1D array. It works well enough with broadcasting and saves space, especially on a microcontroller. There is an option in the SciPy implementation to output the full matrices via the `full_matrices` parameter of the svd function. I think we should do the same thing for parity reasons.
Another consideration is singular value ordering. SciPy orders them largest to smallest, and I believe this is to make low-rank approximation and PCA easier since it amounts to slicing off the first few entries.
Either way though I haven't been able to get successful tests. This is my first time working on a collaborative project. I'm going to clean up my tests a bit, but how would you like me to make them available? Should I fork and PR?
> @v923z I finally got around to running some preliminary tests. A couple of things I noticed. You output a full diagonal matrix, which is unnecessary. The SciPy implementation of the SVD outputs the singular values as a 1D array. It works well enough with broadcasting and saves space, especially on a microcontroller. There is an option in the SciPy implementation to output the full matrices via the `full_matrices` parameter of the svd function. I think we should do the same thing for parity reasons.
Yes, this is a good point, thanks for bringing it up! Here is an excerpt from the documentation:
> `full_matrices` : bool, optional
> If True (default), `U` and `Vh` are of shape (M, M), (N, N). If False, the shapes are (M, K) and (K, N), where K = min(M, N).
It seems to me that the current implementation is equivalent to the default (True) value.
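For reference, the shapes from the excerpt can be checked directly in numpy:

```python
import numpy as np

A = np.ones((5, 3))  # M = 5, N = 3, so K = min(M, N) = 3

U, s, Vh = np.linalg.svd(A, full_matrices=True)
assert U.shape == (5, 5) and Vh.shape == (3, 3)

U, s, Vh = np.linalg.svd(A, full_matrices=False)  # "thin" variant
assert U.shape == (5, 3) and Vh.shape == (3, 3)

# The singular values always come back as a 1D array
assert s.shape == (3,)
```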
> Another consideration is singular value ordering. SciPy orders them largest to smallest, and I believe this is to make low-rank approximation and PCA easier since it amounts to slicing off the first few entries.
This is also true.
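For reference, the descending order makes a rank-k approximation a matter of slicing off the leading columns and rows (plain numpy shown as an illustration):

```python
import numpy as np

# Rank-k approximation: keep only the k largest singular values/vectors.
A = np.array([[3.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.001]])
U, s, Vh = np.linalg.svd(A)  # s is sorted largest to smallest

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]  # best rank-2 approximation
```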
> Either way though I haven't been able to get successful tests. This is my first time working on a collaborative project. I'm going to clean up my tests a bit, but how would you like me to make them available? Should I fork and PR?
Yes, please!
@v923z See PR #674.
> It seems to me that the current implementation is equivalent to the default (True) value.
I just did another check on this. This is actually not what this option does: it switches between the full and the reduced-rank SVD. The reduced-rank SVD does not compute the columns of U and V that are associated with zero singular values.
SciPy always outputs the singular values in a 1D array from largest to smallest. I'll continue the conversation there.
Thanks for the clarification!
@v923z I'm still willing to throw some money your way to get this feature working so we can have homography support in ulab.
@v923z
> Thanks for the clarification!
Hey, sorry for the double misinformation, but I did another check! It's in fact not the reduced-rank SVD! Apologies! I haven't ever touched that feature, but after some more reading I think it has to do with the thin SVD. It's an optimization for m x n matrices that are especially tall or wide. I'm not well versed in this area, but according to Wikipedia, a first-step QR decomposition is faster for tall/wide matrices, which substantially speeds up the computation. I feel so silly getting it wrong twice.
It's probably worth implementing as that's a large use case for SVD where one has a large amount of data points but a low number of dimensions of variation (as is often used for PCA).
Not sure which LAPACK routine you're using, but SciPy defaults to `gesdd`. I'm not sure how much it'll help to look at that LAPACK routine, as the one that I could find for SciPy appears to be in Fortran. The bindings appear to be in this file: https://github.com/scipy/scipy/blob/0b7ab46f777e1086599a3b6e4029304bb6973860/scipy/linalg/flapack_gen.pyf.src#L352
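The QR-first trick mentioned above can be sketched in plain numpy: for a tall A, factor A = QR, take the SVD of the small R, and recombine. This is only an illustration of the idea, not of any particular LAPACK routine:

```python
import numpy as np

def thin_svd_tall(A):
    """Thin SVD of a tall (m x n, m >> n) matrix via an initial QR step:
    SVD-ing the small n x n factor R is much cheaper than SVD-ing A."""
    Q, R = np.linalg.qr(A)           # A = Q R, Q is (m, n), R is (n, n)
    Ur, s, Vh = np.linalg.svd(R)     # R = Ur diag(s) Vh
    return Q @ Ur, s, Vh             # so A = (Q Ur) diag(s) Vh

A = np.random.default_rng(0).normal(size=(200, 4))
U, s, Vh = thin_svd_tall(A)
```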
> Hey, sorry for the double misinformation, but I did another check! It's in fact not the reduced-rank SVD! Apologies! I haven't ever touched that feature, but after some more reading I think it has to do with the thin SVD. It's an optimization for m x n matrices that are especially tall or wide. I'm not well versed in this area, but according to Wikipedia, a first-step QR decomposition is faster for tall/wide matrices, which substantially speeds up the computation. I feel so silly getting it wrong twice.
Oh, don't worry, we all make mistakes!
I've never used SVD, and I haven't experimented with the features in `numpy`, so any insight is highly appreciated.
> It's probably worth implementing as that's a large use case for SVD where one has a large amount of data points but a low number of dimensions of variation (as is often used for PCA).
>
> Not sure which LAPACK routine you're using, but SciPy defaults to `gesdd`.
For various reasons, we can't link against LAPACK, so the SVD is implemented from scratch.
> @v923z I'm still willing to throw some money your way to get this feature working so we can have homography support in ulab.
It's really not a question of money, but of my ineptitude. I think the implementation (lifted from your code) is there; we've only got to iron out the details and figure out how to make the return values `numpy`-compatible. I'll try to allocate a bit of time for this. I'm also eager to get this done.
Yeah, now that we have 4D tensors enabled on every single OpenMV Cam (except the M4 which is out of flash space - but, also super old) we will be leveraging and telling people to use ULAB a lot more. Expect usage of the library to go up significantly.
@v923z - Still happy to pay for this feature to be worked on.
We've almost solved the other problem, this is next ;-)
I was going to comment that we could probably implement `pinv` and `solve_lstsq` with SVD. Not sure if it would be the best implementation though. Either way, I hope you found my tests helpful.
The tests are useful, as are your comments, thanks! I just have to sort out how we return the results. As for `pinv` and `solve_lstsq`, I was wondering whether those could be added in https://github.com/v923z/micropython-ulab/tree/master/snippets.

If I understand you correctly, you'd call `svd` in those two methods. Adding wrappers in the python layer would make configuration of the firmware very flexible.
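As a sketch of what such snippets might look like (plain numpy shown; the names `pinv` and `solve_lstsq` simply follow the suggestion above and are not an existing ulab API):

```python
import numpy as np

def pinv(A, rtol=1e-15):
    """Moore-Penrose pseudoinverse via the thin SVD (a sketch)."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    # Invert only singular values above a relative cutoff; zero the rest
    cutoff = rtol * s.max()
    s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
    return (Vh.T * s_inv) @ U.T      # V diag(s_inv) U^T

def solve_lstsq(A, b):
    """Least-squares solution of A x = b via the pseudoinverse."""
    return pinv(A) @ b

A = np.array([[1.0, 0.0], [0.0, 2.0], [0.0, 0.0]])
x = solve_lstsq(A, np.array([1.0, 4.0, 5.0]))  # residual in the last row is ignored
```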