jbueltemeier closed this issue 4 years ago.
Right, I didn't think about the fact that nn.Conv2d cannot accept a tuple of 4 ints. However, I see the problem that the optional parameter input is not available at the time of initialisation. For this reason I would suggest that this additional padding is done within the created ConvBlock, similar to what I did in #174. So if padding="same" and the kernel_size is even, an additional padding step would be applied before the nn.Conv2d in the modules of ulyanov_et_al_2016.ConvBlock and sanakoyeu_et_al_2018.ConvBlock (rough sketch below). What do you think?
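For illustration, a minimal sketch of that idea, assuming stride 1 and dilation 1; the helper name and structure are placeholders, not the actual ConvBlock code:

```python
from torch import nn

def conv_block(in_channels: int, out_channels: int, kernel_size: int) -> nn.Sequential:
    modules = []
    if kernel_size % 2 == 0:
        # nn.Conv2d only supports symmetric padding, so for an even kernel_size
        # the extra right column / bottom row has to be added beforehand
        before, after = (kernel_size - 1) // 2, kernel_size // 2
        modules.append(nn.ZeroPad2d((before, after, before, after)))
        padding = 0
    else:
        padding = (kernel_size - 1) // 2
    modules.append(nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding))
    return nn.Sequential(*modules)
```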
> However, I see the problem that the optional parameter input is not available at the time of initialisation.
Hm, yeah I didn't think of that.
> For this reason I would suggest that this additional padding is done within the created ConvBlock
I think we could generalize this a bit more: we could implement a PaddedConv2d that takes "valid" (default), "same", and "full" as the padding parameter. In forward we could call torch.nn.functional.pad and simply pass the result to Conv2d (see the sketch below). Thoughts?
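A minimal sketch of what this could look like, assuming stride 1 and dilation 1; the constructor signature and the exact semantics of "full" are assumptions here, not a final design:

```python
import torch
from torch import nn
from torch.nn.functional import pad

class PaddedConv2d(nn.Conv2d):
    def __init__(self, *args, padding: str = "valid", **kwargs) -> None:
        if padding not in ("valid", "same", "full"):
            raise ValueError(f"padding should be 'valid', 'same', or 'full', got {padding}")
        super().__init__(*args, padding=0, **kwargs)
        # precompute the amount of padding per side; F.pad starts at the last dim
        self._pad = []
        for k in reversed(self.kernel_size):
            if padding == "valid":
                before = after = 0
            elif padding == "same":
                # even kernel sizes get the extra entry on the right / bottom
                before, after = (k - 1) // 2, k // 2
            else:  # "full": every input entry sees the complete kernel
                before = after = k - 1
            self._pad.extend([before, after])

    def forward(self, input: torch.Tensor) -> torch.Tensor:
        return super().forward(pad(input, self._pad))
```

For example, PaddedConv2d(3, 8, kernel_size=4, padding="same") would then preserve the spatial size despite the even kernel size.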
Yeah, I'm okay with that.
@jbueltemeier I've settled on same-size padding only. Please review and flag any obvious shortcomings.
This was more work than I expected, but I think it's done now. @jbueltemeier please review and merge if you are happy.
I've implemented the _SameSizeConvNdMixin, which should be used in double inheritance with any nn.Conv* module. With it, SameSizeConv2d and SameSizeConvTranspose2d are just empty class definitions (see the sketch below). Furthermore, this removes the now obsolete functions same_size_padding and same_size_output_padding and adapts all places where they were used.
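To illustrate the pattern rather than the actual PR code: a stripped-down version of the mixin for the plain convolution case, assuming stride 1 and dilation 1 (the transposed case needs different handling and is omitted):

```python
import torch
from torch import nn
from torch.nn.functional import pad

class _SameSizeConvNdMixin:
    def forward(self, input: torch.Tensor) -> torch.Tensor:
        # pad asymmetrically so the spatial size is preserved, then delegate
        # to the wrapped nn.Conv* module; F.pad starts at the last dimension
        padding = []
        for k in reversed(self.kernel_size):
            padding.extend([(k - 1) // 2, k // 2])
        return super().forward(pad(input, padding))

# with the mixin in place, the concrete class is just an empty definition
class SameSizeConv2d(_SameSizeConvNdMixin, nn.Conv2d):
    pass

x = torch.rand(1, 3, 32, 32)
assert SameSizeConv2d(3, 8, kernel_size=4)(x).shape[-2:] == x.shape[-2:]
```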
As mentioned in #174.
If the kernel_size is even, the same_size_padding requires an additional column on the right and/or an additional row at the bottom so that the input_size is not changed (see the example below).
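Concretely, assuming stride 1 and dilation 1, the output size is preserved when the total padding per dimension equals kernel_size - 1; for an even kernel_size this total is odd and cannot be split symmetrically:

```python
kernel_size = 4
total = kernel_size - 1          # 3: cannot be split symmetrically
left = top = total // 2          # 1
right = bottom = total - left    # 2: the additional column / row
```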