Closed — wuyongfa-genius closed this issue 3 years ago
Hi @wuyongfa-genius
We have not included a padding parameter in the current version because it did not require any special handling code-wise, and was therefore not a high priority.
You can simply use any of the torch.nn padding classes [ref link] or call torch.nn.functional.pad() [ref link] during your forward pass.
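For reference, a minimal sketch of that suggestion: pad the input with torch.nn.functional.pad() so its spatial size becomes divisible by the kernel size, then pool. The soft_pool2d helper here is a hypothetical softmax-weighted pooling written with avg_pool2d for illustration, not the repository's actual CUDA implementation.

```python
import torch
import torch.nn.functional as F

def soft_pool2d(x, kernel_size=2, stride=None):
    # Illustrative soft pooling: softmax-weighted average over each window,
    # sum(exp(x) * x) / sum(exp(x)). The 1/k^2 factor that avg_pool2d applies
    # cancels between numerator and denominator.
    stride = stride or kernel_size
    e_x = torch.exp(x)
    return F.avg_pool2d(e_x * x, kernel_size, stride) / F.avg_pool2d(e_x, kernel_size, stride)

x = torch.randn(1, 3, 7, 7)        # 7 is not divisible by kernel_size=2
pad_h = (-x.shape[-2]) % 2         # rows needed to reach a multiple of 2
pad_w = (-x.shape[-1]) % 2         # cols needed to reach a multiple of 2
# F.pad takes (left, right, top, bottom) for a 4-D tensor; zeros are used by
# default, and note that zero entries still receive weight exp(0) = 1.
x_padded = F.pad(x, (0, pad_w, 0, pad_h))
y = soft_pool2d(x_padded, kernel_size=2)
print(y.shape)  # torch.Size([1, 3, 4, 4])
```

The same padding step works in front of the actual SoftPool call, since it only changes the input shape before pooling.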
Best, Alex
Thanks for your reply. I was being silly: I noticed you simply replaced the odd kernel_size with an even one, e.g. replacing 3 with 2 in maxpool. In fact it does not matter too much. Thanks again for your excellent work.
(Original question) I wonder why there is no 'padding' parameter in the softpool function. When the input size is not divisible by the kernel size, what happens? And what should I modify so that I can use padding?