prottmann opened this issue 4 years ago
@prottmann Did you happen to find a work-around? I am trying to use this in my model and I am facing the same error with Float and Half.
Setting `PacConv2d(..., native_impl=True)`
resolves this issue for me. This switches to a different implementation built "using only standard pytorch layers/operations", which already support autograd/autocast. It may have somewhat higher peak memory usage, as pointed out by the author.
Note that I'm using the th14 branch.
Are there any plans to add support for PyTorch 1.6 autocasting? Currently the weights and kernels stay in float while the gradients are half, which gives a type mismatch error during backpropagation.
To run my example I'm using the code from the 1.4 branch.