themightyoarfish closed this issue 5 years ago
It's true that `diff` and `lambda_estimate` are `torch.Tensor`s, but I have not run into any issues applying `np.abs` here. Each tensor only contains a single scalar. What error are you getting? Can you provide the numpy and torch versions?
I face this issue when `diff`, the dividend, is a CUDA tensor, either alone or together with `lambda_estimate`. If the dividend is a CPU tensor, the problem doesn't occur.
If both are CUDA:

`*** TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.`

If `diff` is on CPU:

`tensor(1.)`

If `diff` is CUDA and `lambda_estimate` is CPU:

`*** RuntimeError: Expected object of backend CUDA but got backend CPU for argument #1 'self'`
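For what it's worth, a minimal sketch of a device-safe workaround (the scalar values below are made up; on a GPU machine `diff` could live on `cuda:0`): calling `.item()` on a one-element tensor copies it to host memory as a plain Python float, so `np.abs` and the division never see a CUDA tensor at all.

```python
import torch

# Stand-ins for the two one-element tensors; hypothetical values.
diff = torch.tensor(0.5)             # could be a CUDA tensor in practice
lambda_estimate = torch.tensor(2.0)  # could be on either device

# .item() works regardless of device, so this sidesteps both the
# TypeError (numpy on a CUDA tensor) and the RuntimeError (mixed backends).
error = abs(diff.item()) / lambda_estimate.item()
print(error)  # 0.25
```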
My PyTorch version is `0.5.0a0+a853a74`, my numpy version is `1.15.0`.
I see. I tested this on PyTorch 0.4. Either way it would be better to make the eigenvalue a float immediately. Will fix this in a sec.
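A rough sketch of what "make the eigenvalue a float immediately" could look like inside a power-iteration loop; the matrix, loop structure, and tolerance here are illustrative stand-ins, not the actual `power_iter.py` code:

```python
import torch

def power_iteration_sketch():
    # Toy operator with known top eigenvalue 2.0 (illustrative only).
    torch.manual_seed(0)
    mat = torch.eye(3) * 2.0
    vec = torch.randn(3)
    prev_lambda = 0.0
    for _ in range(10):
        new_vec = mat @ vec
        # Rayleigh quotient; .item() converts the one-element tensor to a
        # plain float immediately, so all later arithmetic is device-free.
        lambda_estimate = (vec @ new_vec / (vec @ vec)).item()
        vec = new_vec / new_vec.norm()
        # Pure-Python scalar math: no np.abs on tensors, no device mixing.
        error = abs(lambda_estimate - prev_lambda) / max(abs(lambda_estimate), 1e-12)
        prev_lambda = lambda_estimate
    return lambda_estimate

print(power_iteration_sketch())  # 2.0 for the 2*I operator above
```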
https://github.com/noahgolmant/pytorch-hessian-eigenthings/blob/7f3ec4659e093ef1c6c0bed6325ba4a2a3f2477b/hessian_eigenthings/power_iter.py#L94
I'm not sure if my fork is the problem, but this line seems to assume that `diff` and `lambda_estimate` are numpy arrays, while they will be `torch.Tensor`s, possibly on the GPU.