logix-project / logix

AI Logging for Interpretability and Explainability🔬

Investigating torch.linalg float64 operations #62

Closed: pomonam closed this issue 7 months ago

pomonam commented 9 months ago

Previously, I noticed a large performance gap when running torch.linalg operations (e.g., eigendecomposition, SVD) in float32 versus float64. The current codebase uses float32 (or the original dtype of the tensor), but it may be worth exploring higher precision.
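
A minimal sketch (not part of the logix codebase) of how one might compare the two dtypes for `torch.linalg.eigh`, measuring both wall-clock time and reconstruction error on a symmetric matrix; the matrix size and metric are illustrative choices, not from the issue:

```python
import time
import torch

torch.manual_seed(0)
a = torch.randn(1024, 1024)
sym = a @ a.T  # symmetric positive semi-definite test matrix

for dtype in (torch.float32, torch.float64):
    m = sym.to(dtype)
    start = time.perf_counter()
    eigvals, eigvecs = torch.linalg.eigh(m)
    elapsed = time.perf_counter() - start
    # Reconstruction error gives a rough measure of numerical accuracy.
    recon = eigvecs @ torch.diag(eigvals) @ eigvecs.T
    err = (recon - m).abs().max().item()
    print(f"{dtype}: {elapsed:.3f}s, max reconstruction error {err:.2e}")
```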

sangkeun00 commented 7 months ago

By default, we now perform torch.linalg.eigh in float64 and then cast the result back to the original dtype. This change is in effect as of commit 20a249fc8a7c4a88196267d7b435f964e75a68fd.
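
A sketch of the upcast-then-downcast pattern described above; the actual implementation lives in the referenced commit and may differ in detail (the helper name `eigh_float64` is hypothetical):

```python
import torch

def eigh_float64(mat: torch.Tensor):
    """Run torch.linalg.eigh in float64, then cast results back to mat's dtype."""
    original_dtype = mat.dtype
    eigvals, eigvecs = torch.linalg.eigh(mat.to(torch.float64))
    return eigvals.to(original_dtype), eigvecs.to(original_dtype)

# Usage: decompose a symmetric float32 matrix with float64 precision internally.
a = torch.randn(256, 256)
vals, vecs = eigh_float64(a @ a.T)
print(vals.dtype, vecs.dtype)  # torch.float32 torch.float32
```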