I previously noticed a large performance gap when performing torch.linalg operations (e.g., eigendecomposition, SVD) in float32 vs. float64. The current codebase uses float32 (or the Tensor's original dtype), but it may be worth exploring higher precision.
By default, we now perform torch.linalg.eigh in float64 and then cast the result back to the original dtype. This change applies from commit 20a249fc8a7c4a88196267d7b435f964e75a68fd .
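The upcast-then-downcast pattern described above can be sketched as follows. This is a minimal illustration, not the codebase's actual implementation; the helper name `eigh_in_double` is hypothetical.

```python
import torch

def eigh_in_double(matrix: torch.Tensor):
    """Compute a symmetric eigendecomposition in float64, then cast the
    results back to the input's original dtype.

    Hypothetical helper illustrating the change described above; the
    real function in the codebase may be named and structured differently.
    """
    orig_dtype = matrix.dtype
    # Upcast to float64 before the decomposition for better speed/accuracy.
    eigenvalues, eigenvectors = torch.linalg.eigh(matrix.to(torch.float64))
    # Cast the results back so callers see the original dtype.
    return eigenvalues.to(orig_dtype), eigenvectors.to(orig_dtype)

# Example: a small symmetric float32 matrix.
a = torch.randn(4, 4, dtype=torch.float32)
sym = a @ a.T  # symmetric by construction
vals, vecs = eigh_in_double(sym)
print(vals.dtype, vecs.dtype)  # torch.float32 torch.float32
```

Callers are unaffected by the internal precision change, since the returned tensors keep the input's dtype.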