Thank you for raising the issue, @LucaMoschella.
The matrix appears to be ill-conditioned (which is consistent with the additional context you have provided), in which case single-precision algorithms for computing eigenvalues aren't reliable.
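As a rough check (a sketch, not part of the original report; the matrix `a` below stands in for the one from the notebook): by Weyl's theorem, eigenvalues of a symmetric matrix computed in a precision with machine epsilon eps carry absolute errors on the order of eps * ||A||, so a float32 solve is only trustworthy when the condition number stays well below 1/eps (about 8.4e6).

```python
import numpy as np

def float32_eig_reliable(a: np.ndarray) -> bool:
    # Weyl's theorem: computed eigenvalues of a symmetric matrix are off
    # by roughly eps * ||A||_2, so the smallest eigenvalue is meaningful
    # only when cond(A) is well below 1/eps (~8.4e6 for float32).
    return np.linalg.cond(a) < 1.0 / np.finfo(np.float32).eps
```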
However, if one were to compare the results of NumPy and PyTorch, it should be duly noted that NumPy computes the eigenvalues of single-precision matrices by converting them to double precision, performing the operation, and then converting the result back to single precision. A relevant function used in SciPy is scipy.linalg.lapack.find_best_lapack_type, which determines the optimal type for a variety of linear algebra methods.
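That strategy is easy to mirror on the PyTorch side; a minimal sketch (using torch.linalg.eigvalsh, the modern replacement for symeig):

```python
import torch

def eigvalsh_via_double(a: torch.Tensor) -> torch.Tensor:
    # Mirror NumPy's approach: upcast to float64, solve, downcast.
    return torch.linalg.eigvalsh(a.to(torch.float64)).to(a.dtype)
```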
Closing this issue, as this is expected behavior. In many iterative algorithms, double precision is necessary when dealing with singular matrices or matrices with a small eigengap, and there's not much we can do about that: it comes from the numerical properties of the algorithm.
🐛 Bug
symeig produces wrong eigenvalues on some matrices in torch.float precision, both on CUDA and on CPU.

To Reproduce
I'm not sure what causes the instability on some matrices.
Steps to reproduce the behavior:
You can see the output of the notebook here, where float precision gives negative eigenvalues.
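The notebook itself is not inlined here; the snippet below is an illustrative stand-in, assuming an ill-conditioned positive semi-definite matrix with a near-zero smallest eigenvalue (see Additional context) and using torch.linalg.eigvalsh in place of the deprecated symeig:

```python
import numpy as np
import torch

# Build a synthetic symmetric PSD matrix whose eigenvalues span many
# orders of magnitude (illustrative; not the matrix from the notebook).
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.standard_normal((512, 512)))
a = (q * np.logspace(-12, 0, 512)) @ q.T  # eigenvalues in [1e-12, 1]

evals32 = torch.linalg.eigvalsh(torch.tensor(a, dtype=torch.float32))
evals64 = torch.linalg.eigvalsh(torch.tensor(a, dtype=torch.float64))

print(evals32.min().item())  # often spuriously negative in float32
print(evals64.min().item())  # stays (close to) non-negative in float64
```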
Expected behavior
Float precision and double precision should give similar results.
Environment
Additional context
The matrix is (theoretically) known to have positive eigenvalues and the first one should be (close to) zero.
cc @vishwakftw @SsnL @jianyuh