Open kuacakuaca opened 4 months ago
What is this issue about? You think this is a problem that even kernel sizes are not permitted? So you want to change that?
I want to use an even kernel size for the pooling layers; is there any reason we don't want to allow this?
As far as I remember, there were technical reasons why it did not work. (I actually was against having the check here at that point, because this makes it unclear why it was not allowed. Just trying it out shows the problem much more clearly.) Maybe it's actually not a problem anymore in a more recent PyTorch version? I don't know. Just try it out.
I think the limitation (from PyTorch side) was only for Convolution with "padding=same"...
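A quick sanity check along the lines of "just try it out": a minimal sketch (not from the thread) showing that an even pooling kernel runs fine in recent PyTorch, and that an even-kernel convolution with `padding="same"` also runs (PyTorch pads asymmetrically for even kernels, though `"same"` still requires stride 1).

```python
import torch
import torch.nn as nn

# Even kernel size in a pooling layer: no restriction in PyTorch itself.
pool = nn.MaxPool1d(kernel_size=2, stride=2)
x = torch.randn(1, 4, 10)  # (batch, channels, time)
print(pool(x).shape)  # torch.Size([1, 4, 5])

# Even kernel size in a convolution with padding="same":
# PyTorch handles the asymmetric padding internally (stride must be 1).
conv = nn.Conv1d(4, 4, kernel_size=2, padding="same")
print(conv(x).shape)  # torch.Size([1, 4, 10])
```

If this runs without error on the PyTorch version in use, the check in `generic_frontend.py` could likely be relaxed for pooling.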
thanks! I'll just try.
https://github.com/rwth-i6/i6_models/blob/3c9173691521778b1e8b4070c172cbe929e4826b/i6_models/parts/frontend/generic_frontend.py#L88