stalhabukhari opened this issue 3 years ago
Hi, I have faced the same issue. Have you solved it?
@liujingcs Ah! It has been a long while. I think I upgraded the PyTorch version (probably to 1.8).
If nothing works, you may want to check out this work by the same group: https://github.com/noahgolmant/pytorch-hessian-eigenthings
Hi, I wonder if you have solved this issue. Thanks so much.
Hi guys, I met the same issue and just figured it out. In my case, it was because some layers were defined in the model but did not participate in the forward or backward pass. The issue was fixed after I deleted the unused layers. Another way to resolve it is to modify the `get_params_grad` function in the utils of the PyHessian library: when the gradient is None, the corresponding entry in `grads` should be a tensor of zeros instead of a float zero (see the sketch below).
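A minimal sketch of that second fix, assuming the `get_params_grad` helper from `pyhessian/utils.py` that is linked further down in the thread; `torch.zeros_like` keeps the placeholder on the same device and dtype as the parameter:

```python
import torch

def get_params_grad(model):
    """Collect trainable parameters and their gradients (patched version)."""
    params, grads = [], []
    for param in model.parameters():
        if not param.requires_grad:
            continue
        params.append(param)
        if param.grad is None:
            # Layers that never took part in backprop have no gradient;
            # append a zero tensor instead of the original float 0. so that
            # downstream torch.autograd.grad calls receive tensors only.
            grads.append(torch.zeros_like(param))
        else:
            grads.append(param.grad + 0.)  # +0. creates a copy of the gradient
    return params, grads
```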
Hi!
Thank you for making the source code of your work available. I tried to use the library for an application involving a 3D network architecture, and ran into the following issue.
Interestingly, the issue does not occur at the first call to back-propagation via `loss.backward()`, but rather at the call to `torch.autograd.grad()`. I believe that the `float` object in question is the `0.` manually inserted when `param.grad is None` in the following routine: https://github.com/amirgholami/PyHessian/blob/c2e49d2a735107a5d7ce2917d357d7a39b409fa4/pyhessian/utils.py#L61-L72
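For reference, the linked routine looks roughly like this (paraphrased from `pyhessian/utils.py` at that commit; see the link above for the exact code). Note the float `0.` that the patched version earlier in the thread replaces with `torch.zeros_like`:

```python
def get_params_grad(model):
    """Get model parameters and the corresponding gradients."""
    params, grads = [], []
    for param in model.parameters():
        if not param.requires_grad:
            continue
        params.append(param)
        # When param.grad is None (e.g. an unused layer), a plain Python
        # float 0. is appended in place of a gradient tensor.
        grads.append(0. if param.grad is None else param.grad + 0.)
    return params, grads
```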
If I am right, it is even more mind-boggling that a `float` is able to pass the data-type check in PyTorch (I mistakenly mixed the `outputs` and `inputs` arguments of `torch.autograd.grad`). Kindly guide me on what I can do here.

P.S. `hessian_analysis.py` is a wrapper I wrote around the library for my use case. I verified the wrapper by running a 2-layer neural network on a regression task.
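In case it helps anyone reproduce the setup, here is a minimal sketch of that sanity check, assuming the top-level `hessian` class advertised in the PyHessian README (the actual `hessian_analysis.py` wrapper is not shown in the thread):

```python
import torch
import torch.nn as nn
from pyhessian import hessian

# Toy 2-layer network for a regression task, as in the verification run.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
criterion = nn.MSELoss()

inputs = torch.randn(64, 10)
targets = torch.randn(64, 1)

# PyHessian runs its own forward/backward pass on (inputs, targets).
hessian_comp = hessian(model, criterion, data=(inputs, targets), cuda=False)

# Top Hessian eigenvalue via power iteration.
top_eigenvalues, _ = hessian_comp.eigenvalues(top_n=1)
print("Top Hessian eigenvalue:", top_eigenvalues[-1])
```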