rose-jinyang opened 2 years ago
Hi. You are correct that it is better not to change the ratio. I'd suggest first scaling the image so that the shortest side has the target size, and then cropping the image to a square. That way, the pixels are uniformly scaled in both the x and y dimensions. For example, for an image of size 256x128, first scale it to 512x256 and then crop it to 256x256.
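The scale-then-crop step described above could be sketched like this (a minimal example using Pillow; the function name and center-crop choice are just for illustration, not from this repo):

```python
from PIL import Image

def resize_and_center_crop(img, size):
    """Scale so the shortest side equals `size`, then center-crop to size x size."""
    w, h = img.size
    scale = size / min(w, h)
    # Uniform scaling: the same factor is applied to both dimensions,
    # so the aspect ratio is preserved.
    img = img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
    w, h = img.size
    left = (w - size) // 2
    top = (h - size) // 2
    return img.crop((left, top, left + size, top + size))

# e.g. a 256x128 image is first scaled to 512x256, then cropped to 256x256
img = Image.new("RGB", (256, 128))
out = resize_and_center_crop(img, 256)
print(out.size)  # (256, 256)
```

For segmentation, the same resize (with nearest-neighbor interpolation) and the same crop box would need to be applied to the mask so that image and mask stay aligned.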
Thanks for your reply. One more question: how should I resize and crop images of diverse sizes at inference time?
Hello, how are you? Thanks for your contribution to this project. I'm NOT sure whether this RMI loss would work well when we resize the image and mask to the input size (NxN pixels) without keeping the width-height ratio in the data augmentation step. I am working on an image segmentation project, and my dataset contains many images and masks of different sizes. The data from the dataloader are resized to the input size (e.g. 256x256) and fed into the model, so the original width-height ratio of the image and mask is NOT kept. Does this RMI loss still work well in that case?