Hi,

I am trying to use pacnet in my model, as in this example: https://github.com/NVlabs/pacnet/blob/12d52b6ebdd8e8afa0d2e54486ba77fbb3697a53/task_semanticSegmentation/fcn8s.py

However, when I try to implement mixed-precision training with torch.cuda.amp (with torch 1.11), I get an error in the backward method:
File "<path>/model/pac.py", line 179, in backward
grad_in_mul_k = torch.einsum('iomn,ojkl->ijklmn', (grad_output, weight))
File "/opt/conda/lib/python3.8/site-packages/torch/functional.py", line 325, in einsum
return einsum(equation, *_operands)
File "/opt/conda/lib/python3.8/site-packages/torch/functional.py", line 327, in einsum
return _VF.einsum(equation, operands) # type: ignore[attr-defined]
RuntimeError: expected scalar type Half but found Float
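As far as I can tell, the mismatch happens because under autocast the incoming grad_output is float16 while the weight saved during forward is still float32. One possible workaround at this call site (my own sketch based on the traceback, not the actual pacnet code, and it may not cover other ops in the same backward) is to cast the operands to a common dtype:

```python
# Hypothetical local fix at the einsum from the traceback (sketch only):
# cast weight to grad_output's dtype so einsum sees matching scalar types.
grad_in_mul_k = torch.einsum(
    'iomn,ojkl->ijklmn',
    (grad_output, weight.to(grad_output.dtype)),
)
```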
I am using the th14 branch. Is there a fix or a work-around for this?
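In case it is useful, the pattern PyTorch documents for making custom autograd Functions autocast-safe is torch.cuda.amp.custom_fwd/custom_bwd. Below is a minimal sketch of that pattern; PacConvFn and its body are placeholders I made up, not the actual pac.py code:

```python
import torch
from torch.cuda.amp import custom_fwd, custom_bwd

class PacConvFn(torch.autograd.Function):
    """Hypothetical stand-in for the autograd Function in pac.py."""

    @staticmethod
    @custom_fwd(cast_inputs=torch.float32)  # under autocast, inputs are cast to fp32
    def forward(ctx, input, weight):
        ctx.save_for_backward(input, weight)
        return input * weight.sum()  # placeholder for the real pac convolution

    @staticmethod
    @custom_bwd  # backward runs with the same (disabled) autocast state as forward
    def backward(ctx, grad_output):
        input, weight = ctx.saved_tensors
        # grad_output, input, and weight are all fp32 here, so einsum-style
        # ops no longer see mixed Half/Float operands.
        grad_input = grad_output * weight.sum()
        grad_weight = (grad_output * input).sum().expand_as(weight)
        return grad_input, grad_weight

# Usage under autocast (sketch):
x = torch.randn(4, 8, device='cuda', requires_grad=True)
w = torch.randn(8, device='cuda', requires_grad=True)
with torch.cuda.amp.autocast():
    out = PacConvFn.apply(x, w)
out.sum().backward()
```

The trade-off is that the op then always runs in fp32, even inside autocast regions, but the backward no longer sees mixed Half/Float dtypes.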
Thanks!

Please feel free to delete this issue, as it is a duplicate; unfortunately, I cannot delete it myself once posted.