Closed erosten closed 5 years ago
Ya, I think that's a limitation of the original model architecture from the paper.
Check out the any_input_size branch. I changed the architecture slightly to get around this problem. I don't have performance numbers for that branch, though; I never got around to training the network.
I ended up just resizing to 256x256, but will check out that branch at some point. Thanks for the response!
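For anyone wondering why 256x256 sidesteps the problem: 256 halves cleanly at every stride-2 stage, while 300 eventually produces an odd size. A quick sanity check in plain Python, assuming the usual ENet layout of an initial block plus two downsampling bottlenecks (three stride-2 reductions in total):

```python
def downsampled_sizes(size, steps=3):
    """Spatial size after each stride-2 stage, assuming clean halving."""
    out = []
    for _ in range(steps):
        size = size // 2
        out.append(size)
    return out

print(downsampled_sizes(256))  # [128, 64, 32] -- even at every intermediate step
print(downsampled_sizes(300))  # [150, 75, 37] -- 75 is odd, which triggers the mismatch
```

Any input whose side length is divisible by 8 avoids the odd intermediate size here.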
Hi, I'm trying to train on a custom dataset of size 300x300, and an error is thrown in the second DownSamplingBottleneck: when main and ext are concatenated, there's a size mismatch.
This is because the spatial size is halved at each downsampling stage: after the initial block and the first downsampling bottleneck, 300x300 becomes 150x150 and then 75x75 for both main and ext.
During the second downsampling bottleneck, main becomes 38, since floor((75 + 2x1 - 1x(3-1) - 1)/2) + 1 = floor(74/2) + 1 = 38.
However, ext becomes floor((75 - 0x1 - 1x(2-1) - 1)/2) + 1 = floor(73/2) + 1 = 37, since the division is floored before the +1 is applied.
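The arithmetic above can be checked directly with PyTorch's Conv2d/MaxPool2d output-size rule (the division is floored before the +1). A minimal sketch in plain Python, plugging in the kernel/padding/dilation values from this thread:

```python
import math

def conv_out(size, kernel, stride=2, padding=0, dilation=1):
    # PyTorch output-size rule: floor((H + 2p - d*(k-1) - 1) / s) + 1
    return math.floor((size + 2 * padding - dilation * (kernel - 1) - 1) / stride) + 1

main = conv_out(75, kernel=3, padding=1)  # (75 + 2 - 2 - 1) // 2 + 1 = 38
ext = conv_out(75, kernel=2)              # (75 - 1 - 1) // 2 + 1 = 37
print(main, ext)  # 38 37 -- torch.cat then fails on the mismatched spatial dims
```

For an even input size like 150, both expressions agree (75 and 75), which is why the first downsampling goes through fine.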
Do you have any suggestions to get around this? I'm relatively new to pytorch.