Open Bard2803 opened 1 year ago
Hi, sorry for the late reply.
The size mismatch issue is caused by the downsampled features and upsampled features from the 'UNet-like' skip connection.
A toy example:
(1) take an input feature x with spatial size 7;
(2) downsample it by 2x (with padding) to get a downsampled feature x_down with spatial size 4;
(3) upsample x_down by 2x to get an upscaled feature x_up with spatial size 8;
(4) add x_up to x and hit the spatial size mismatch (8 vs 7).
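The toy example above can be reproduced with a minimal PyTorch sketch (the layer choices here are illustrative, not the model's actual layers):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 7)  # input feature with spatial size 7

# Downsample by 2x with padding: output size = floor((7 + 2*1 - 3)/2) + 1 = 4
down = nn.Conv1d(1, 1, kernel_size=3, stride=2, padding=1)
x_down = down(x)

# Upsample by 2x: 4 -> 8
up = nn.Upsample(scale_factor=2, mode="nearest")
x_up = up(x_down)

print(x.shape, x_down.shape, x_up.shape)
# x_up + x would now raise a RuntimeError: size mismatch (8 vs 7)
```

Because 7 is odd, the downsample/upsample round trip lands on 8 instead of 7, so the skip connection cannot be added element-wise.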
The solution is to check the size of the input image and pad it if needed. For example, we can pad x in the above example to 8. The key code is here.
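A minimal sketch of that padding fix (this is an assumed helper, not the repository's linked code; it pads the bottom/right of an NCHW tensor up to a chosen multiple):

```python
import torch
import torch.nn.functional as F

def pad_to_multiple(x, multiple):
    """Pad the last two (H, W) dims of an NCHW tensor up to a multiple."""
    h, w = x.shape[-2:]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    # F.pad takes (left, right, top, bottom) for the last two dims
    return F.pad(x, (0, pad_w, 0, pad_h))

x = torch.randn(1, 3, 1360, 1814)
x_padded = pad_to_multiple(x, 8)   # pad so every 2x down/up round trip lines up
print(x_padded.shape)              # torch.Size([1, 3, 1360, 1816])
```

After the forward pass, the output can be cropped back to the original 1360x1814 size. The multiple should be 2^k for a network with k downsampling stages.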
Hope this helps you.
Hello, and thank you for your work.
My input image is 1360x1814 and I get the following error:
```
tensor a's shape: torch.Size([1, 144, 452, 680])
tensor b's shape: torch.Size([1, 144, 453, 680])
```
This happens on some i-th forward step. It is quite strange that the mismatch happens, isn't it?