Closed linzhiqiu closed 5 years ago
I think the network can basically handle variable input sizes, but the segmentation result then varies with scale, just as you observed. That is why multi-scale testing is officially employed: the input image is rescaled by factors ranging from 0.5 to 1.75 and the predictions are averaged, in the spirit of ensembling the jittered results. I also do this in eval.py, but not in demo.py.
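The multi-scale averaging described above can be sketched roughly as follows. This is a minimal NumPy sketch, not the repository's actual code: `NUM_CLASSES`, the scale set, and the `model` callable are illustrative assumptions, and nearest-neighbor resizing stands in for the bilinear interpolation a real pipeline would use.

```python
import numpy as np

NUM_CLASSES = 150  # ADE20K has 150 classes (assumption for this sketch)

def resize_nn(x, out_h, out_w):
    """Nearest-neighbor resize of an (H, W, ...) array.
    A real pipeline would use bilinear interpolation instead."""
    h, w = x.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return x[rows][:, cols]

def multi_scale_predict(model, image, scales=(0.5, 0.75, 1.0, 1.25, 1.5, 1.75)):
    """Test-time ensemble: run the model at several input scales,
    resize the per-class score maps back to the original resolution,
    and average them."""
    h, w = image.shape[:2]
    acc = np.zeros((h, w, NUM_CLASSES))
    for s in scales:
        scaled = resize_nn(image, max(1, int(round(h * s))),
                                  max(1, int(round(w * s))))
        scores = model(scaled)          # hypothetical model: (h', w', C) scores
        acc += resize_nn(scores, h, w)  # bring scores back to full resolution
    # Averaged scores; the final mask is acc.argmax(-1).
    return acc / len(scales)
```

The key point is that averaging happens on the per-class scores, and only afterwards is the argmax taken to produce the final mask.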
I wonder whether a pretrained PSPNet can handle variable-size input. I am using the ADE20K model, and I noticed that the input image to the model is 473x473 by default. If I change it to 512x512, the segmentation mask ends up off by a few pixels.
Should I instead keep the input at 473x473 and then use bilinear upsampling to enlarge the mask to 512x512?
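For the upsampling route, one common approach (an assumption here, not necessarily this repository's code) is to run the network at its trained 473x473 resolution and bilinearly upsample the per-class score maps, rather than the hard argmax mask, to 512x512 before taking the argmax. A minimal self-contained NumPy sketch of such bilinear resizing:

```python
import numpy as np

def bilinear_resize(scores, out_h, out_w):
    """Bilinearly resize an (H, W, C) score map to (out_h, out_w, C)."""
    h, w = scores.shape[:2]
    ys = np.linspace(0, h - 1, out_h)       # fractional source rows
    xs = np.linspace(0, w - 1, out_w)       # fractional source cols
    y0 = np.floor(ys).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]           # row blend weights
    wx = (xs - x0)[None, :, None]           # col blend weights
    top = scores[y0][:, x0] * (1 - wx) + scores[y0][:, x1] * wx
    bot = scores[y1][:, x0] * (1 - wx) + scores[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# Hypothetical usage: upsample 473x473 logits, then take the argmax.
# up = bilinear_resize(logits_473, 512, 512)
# mask_512 = up.argmax(-1)
```

Upsampling the scores keeps class boundaries smooth; upsampling the already-argmaxed mask with nearest-neighbor would produce blocky edges.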