accord-net / framework

Machine learning, computer vision, statistics and general scientific computing for .NET
http://accord-framework.net
GNU Lesser General Public License v2.1

Inverse Matrix troubles #2147

Open · artemus-tech opened this issue 4 years ago

artemus-tech commented 4 years ago

Hello. I'm having trouble with inverse matrix calculation. The matrix file is attached: [F100000000000.txt](https://github.com/accord-net/framework/files/4878499/F100000000000.txt)

The result of `f.Inverse()` is very unlikely to be correct. I've run several tests:

  1. Multiplying `f.Inverse()` by `f`: I can't say the result is the identity matrix (see the sketch below).
  2. Comparing with NumPy: its precision is higher and the result is more realistic (close to the identity matrix, etc.). I'd guess it's the most correct result among the libraries I compared.
  3. Alglib's inversion and Accord.NET's are identical.
  4. Comparing with LAPACK (DLL added manually): differences appear in the third digit after the decimal point.

I can't understand why, but NumPy provides the best result, so perhaps something should be changed. I don't want to resort to python.runtime.dll magic, etc.

I can conclude that the wrong results are not directly bound to the library itself; perhaps the main reason is precision or rounding error. So I have an additional question: how can I increase the precision of the calculation, for example by 4 more digits?

cesarsouza commented 4 years ago

Hi there,

Thanks for opening the issue. Have you tried using the PseudoInverse function instead? It should use the SVD, which is more costly, but may be a bit more precise if your matrix is near-singular.

Regards, Cesar

artemus-tech commented 3 years ago

Yeah, I've checked it too. The result is worse.

I mean, is there any way to carry around 18 digits after the decimal point in the calculation, or is that a limitation of the double type used by Accord.NET? NumPy seems to allow it, and its result is closer to the truth.

cesarsouza commented 3 years ago

Well, you can try to invert it using decimal instead of double. But I am not sure it will work. I believe this problem may be more related to the original algorithm implementations in alglib and JAMA rather than just an issue with floating-point precision...

Please let me know if it works with decimal types, though!