Closed VincentXWD closed 6 years ago
The reason is that each unpadded ("valid") convolution eats a few pixels off each side, so after all the convolutions on the initial 572x572 input, the output shrinks to 388x388.
If we had started with 388x388, the result would have been smaller still and would not match the ground-truth images. So we pad by (572 - 388) / 2 = 92 pixels on each side to compensate for the pixels lost during the convolutions.
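As a sanity check, here is a minimal sketch of that size arithmetic, assuming the standard U-Net layout from the paper (two unpadded 3x3 convolutions per level, 2x2 max pooling on the way down, 2x2 up-convolutions on the way up); it is an illustration, not the notebook's actual code:

```python
def conv3x3(s):
    # An unpadded 3x3 convolution loses 1 pixel on each side.
    return s - 2

def pool(s):
    # A 2x2 max pool halves the spatial size.
    return s // 2

def upconv(s):
    # A 2x2 up-convolution doubles the spatial size.
    return s * 2

s = 572
# Contracting path: 4 levels of (conv, conv, pool).
for _ in range(4):
    s = pool(conv3x3(conv3x3(s)))
# Bottleneck: two more convolutions.
s = conv3x3(conv3x3(s))
# Expanding path: 4 levels of (up-conv, conv, conv).
for _ in range(4):
    s = conv3x3(conv3x3(upconv(s)))

print(s)                    # 388: final output size
print((572 - 388) // 2)     # 92: padding needed per side
```

Tracing it by hand gives the same chain as Figure 1 of the U-Net paper (572 → 570 → 568 → 284 → ... → 388), which is exactly why the preprocessing pads each 388x388 slice by 92 pixels per side to reach 572x572.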
I'm reading the code in https://github.com/IBBM/Cascaded-FCN/blob/master/notebooks/cascaded_unet_inference.ipynb and I'm confused by this bold comment in _step1_preprocess_imgslice: "6- Pad img slices with 92 pixels on all sides (so total shape is 572x572)". I wonder why this preprocessing needs to pad the image with 92 pixels. I didn't find a reason in the paper. Please tell me. Thanks.