Closed catherineyeh closed 2 years ago
> Could it be that the transformations/preprocessing steps need to be modified?
I believe the BilinearResize assumes the image has three channels, and you are giving it a single-channel input. You can try converting your grayscale images to RGB (using something like `Image.open("image.png").convert("RGB")`), and if that doesn't work, you could try replacing the BilinearResize with a different downsampling function that behaves correctly.
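A minimal sketch of both suggestions, using PIL only (the file paths and sizes are placeholders; an in-memory image stands in for an actual grayscale training sample):

```python
from PIL import Image

# Synthetic single-channel (mode "L") image standing in for one of the
# grayscale 256x256 training images; replace with Image.open("image.png").
gray = Image.new("L", (256, 256), color=128)

# Converting to RGB replicates the single gray channel into R, G and B,
# giving the three channels that channel-assuming resize code expects.
rgb = gray.convert("RGB")
print(gray.mode, rgb.mode)    # L RGB
print(rgb.getpixel((0, 0)))   # (128, 128, 128)

# Alternative: do the downsampling yourself with PIL's bilinear resize
# instead of the pipeline's BilinearResize (64x64 is an arbitrary target).
small = rgb.resize((64, 64), Image.BILINEAR)
print(small.size)             # (64, 64)
```

If the conversion alone fixes the "Inputs" images in the checkpoints, the resize step was the culprit.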
Hi all, I've tried to perform the super-resolution task with a trained stylegan2-ada-pytorch model as the generator. However, during training, the images stored in the checkpoints folder look like the ones below:
I'm not entirely sure whether the images under "Inputs" are being downsampled correctly during training. Could it be that the transformations/preprocessing steps need to be modified? The training images were grayscale 256x256.