Closed: avdhoeke closed this issue 7 months ago.
Hi,
thanks for reporting this. I've reproduced your bug successfully on this branch.
I did a quick check of the code to rule out the possibility of the padding
argument not being passed down correctly.
Need to think about what causes this bug. Any help would be appreciated. It seems like the outputs from PyTorch's fold
are much smaller in magnitude.
I think I've found and fixed the problem. Could you install from the bug30-fold-with-padding
branch and verify that this fixes it?
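For anyone following along, installing directly from a branch usually looks like the sketch below; the repository URL is a placeholder, only the branch name comes from this thread:

```
# substitute the actual repository URL for the placeholder
pip install "git+https://github.com/<user>/<repo>.git@bug30-fold-with-padding"
```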
Cheers, Felix
Problem fixed. I also manually checked using another configuration:
```python
import torch

# random output of an im2col operation
inputs = torch.randn(64, 3 * 2 * 2, 7 * 11)
output_size = (4, 8)

# other module hyperparameters
kernel_size = 2
dilation = 1
padding = 2
stride = 1
```
and both outputs match.
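As a sanity check on the configuration above, the number of sliding blocks per spatial dimension implied by these hyperparameters is (4 + 2·2 − 1·(2 − 1) − 1)/1 + 1 = 7 and (8 + 2·2 − 1·(2 − 1) − 1)/1 + 1 = 11, so L = 77 matches the last dimension of `inputs`. A minimal sketch using only PyTorch's built-in fold (not the fixed `FoldNd`) to confirm the shapes are consistent:

```python
import torch
import torch.nn.functional as F

# Hyperparameters from the configuration above
output_size = (4, 8)
kernel_size, dilation, padding, stride = 2, 1, 2, 1

# L = 7 * 11 = 77 sliding blocks, consistent with the formula above
inputs = torch.randn(64, 3 * 2 * 2, 7 * 11)

out = F.fold(
    inputs,
    output_size,
    kernel_size,
    dilation=dilation,
    padding=padding,
    stride=stride,
)
print(out.shape)  # torch.Size([64, 3, 4, 8])
```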
Thanks for the quick and efficient fix!
Cheers,
Arthur
No worries, thanks for the clean report! Gonna create a new release today.
It seems like changing the padding argument from the example makes the torch.nn.functional.fold and FoldNd outputs diverge. What is even more surprising is that those tensors agree except for element 0.
Any idea where this may come from?
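For localizing mismatches like this, one generic approach is an elementwise comparison of the two outputs. A minimal sketch with stand-in tensors (the original fold outputs are not reproduced here):

```python
import torch

# Hypothetical tensors standing in for the two fold outputs
a = torch.arange(12.0).reshape(3, 4)
b = a.clone()
b[0, 0] += 1.0  # inject a mismatch at element 0

# Indices where the two tensors disagree
mismatch = ~torch.isclose(a, b)
print(mismatch.nonzero())       # tensor([[0, 0]])
print((a - b).abs().max())      # tensor(1.)
```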