Open dashstander opened 2 weeks ago
🐛 Bug

If I have a linear operator `A` on GPU and call `Ident = cola.ops.I_like(A)`, the new identity operator will be placed on CPU. This is annoying because combined expressions such as `cola.eig(Ident - A, k=1)` will fail because the tensors are on different devices.

To reproduce

Code snippet to reproduce:

```python
import torch
import cola

print(f'CUDA Available {torch.cuda.is_available()}')
A = cola.ops.Diagonal(torch.randn((200, 200)))
A = A.to('cuda')
Ident = cola.ops.I_like(A)
print(f'A: {A.device}')
print(f'Ident: {Ident.device}')
```

Stack trace/error message:

```
CUDA Available True
A: cuda:0
Ident: cpu
```

Expected Behavior

`I_like(A).device == A.device`.

System information

Please complete the following information:
- CoLA: 0.0.5
- Pytorch: 2.3.0+cu121
- Python: 3.11.9
- Ubuntu 24.04
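The expected behavior can be illustrated with a small device-propagation sketch. This is a toy mock, not CoLA's actual classes or implementation: the point is simply that an `I_like`-style factory should inherit the source operator's device rather than defaulting to CPU.

```python
from dataclasses import dataclass

@dataclass
class MockOperator:
    """Toy stand-in for a linear operator; not the real cola.ops API."""
    shape: tuple
    device: str = "cpu"

def i_like(op: MockOperator) -> MockOperator:
    # The behavior requested in this issue: the identity inherits the
    # source operator's device instead of defaulting to CPU.
    return MockOperator(shape=op.shape, device=op.device)

A = MockOperator(shape=(200, 200), device="cuda:0")
Ident = i_like(A)
print(Ident.device)  # cuda:0, matching A.device
```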