It has been a long time. If I remember correctly, I resized the images to 480×480 for training because of the GPU memory limit. For inference, I also tried to resize the images to the maximum resolution the memory allowed. After inference, I resized all the generated images back to (2048, 1024), the same as the original size.
After that, the shorter side of each image is rescaled to 600 when training the domain adaptation model.
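In case it helps, here is a minimal sketch of those resizing steps using Pillow. This is not code from the repo: the function names and the example file path are placeholders, and the bicubic filter is an assumption.

```python
from PIL import Image

def resize_for_cyclegan_training(img: Image.Image) -> Image.Image:
    # Square-resize to 480x480 to fit CycleGAN training into GPU memory.
    return img.resize((480, 480), Image.BICUBIC)

def restore_original_size(generated: Image.Image) -> Image.Image:
    # After inference, bring the generated images back to the native
    # Cityscapes resolution of (2048, 1024).
    return generated.resize((2048, 1024), Image.BICUBIC)

def rescale_shorter_side(img: Image.Image, target: int = 600) -> Image.Image:
    # Rescale so the shorter side equals `target` while keeping the aspect
    # ratio, as done before training the domain adaptation model.
    w, h = img.size
    scale = target / min(w, h)
    return img.resize((round(w * scale), round(h * scale)), Image.BICUBIC)

if __name__ == "__main__":
    img = Image.open("frankfurt_000000_000294_leftImg8bit.png")  # placeholder path
    restored = restore_original_size(img)
    print(rescale_shorter_side(restored).size)  # (2048, 1024) -> (1200, 600)
```

For a (2048, 1024) image the scale factor is 600 / 1024 = 0.5859375, which gives exactly (1200, 600), matching the question below.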
I hope this solves your problem. If you have any other questions, feel free to provide further information.
The size of Cityscapes images is (2048, 1024), but the script prepare_cityscapes_dataset.py provided by CycleGAN resizes them to (256, 256). I want to ask which one you chose as the input size for CycleGAN: (2048, 1024), (256, 256), or another size?
Also, according to your paper, you rescale all images by setting the shorter side of the image to 600 while keeping the aspect ratio. So is the input size of UMT (1200, 600) when using Cityscapes?
Looking forward to your reply. Thank you very much.