Hi, I want to use the pix2pix architecture with the CIFAR-10 dataset, and the problem is the input size.
I do not want to resize the images, because resizing is computationally expensive and makes each epoch take much longer.
I want the model architecture (generator and discriminator) to suit this input size, so I would like to know whether there is any rule for modifying the model for this purpose. I tried modifying it myself, but the output was not satisfactory, unlike the results before the modifications. I suspect there is some logic behind the design of the U-Net and PatchGAN architectures (number of filters, layers, ...), and I hope you can help me understand it so I can redesign the model to get the same quality of output on this dataset.
Thank you so much.
For the generator, you can remove the first two downsampling layers and their corresponding upsampling layers from defineG_unet_128. The general rule: each stride-2 layer halves the spatial resolution, and the U-Net encoder should reduce the input to 1x1 at the bottleneck, so a 32x32 input needs five downsampling layers (32 -> 16 -> 8 -> 4 -> 2 -> 1) instead of the seven in defineG_unet_128. For the discriminator, remove one or two downsampling layers, which shrinks the receptive field of each patch accordingly. If you are not familiar with Lua, you may want to modify our PyTorch code instead.
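As a rough illustration, here is a minimal, self-contained PyTorch sketch of what such a reduced pair of networks could look like for 32x32 CIFAR-10 images. This is not the repository's code: the class names (`UnetGenerator32`, `PatchDiscriminator32`) and the exact filter counts are assumptions that follow the usual pix2pix conventions (4x4 stride-2 convolutions, ngf/ndf = 64, skip connections, a patch-level discriminator output).

```python
# Illustrative sketch only, not the pix2pix repository's code.
# U-Net generator with 5 downsamplings (32 -> 16 -> 8 -> 4 -> 2 -> 1)
# and a PatchGAN discriminator with 2 stride-2 layers instead of 3.
import torch
import torch.nn as nn

def down(in_ch, out_ch, norm=True):
    # Encoder block: 4x4 stride-2 conv halves the spatial size.
    layers = [nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1, bias=not norm)]
    if norm:
        layers.append(nn.BatchNorm2d(out_ch))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return nn.Sequential(*layers)

def up(in_ch, out_ch):
    # Decoder block: 4x4 stride-2 transposed conv doubles the spatial size.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True))

class UnetGenerator32(nn.Module):
    """U-Net for 32x32 inputs: five downsamplings reach a 1x1 bottleneck."""
    def __init__(self, in_ch=3, out_ch=3, ngf=64):
        super().__init__()
        self.d1 = down(in_ch, ngf, norm=False)      # 32 -> 16
        self.d2 = down(ngf, ngf * 2)                # 16 -> 8
        self.d3 = down(ngf * 2, ngf * 4)            # 8 -> 4
        self.d4 = down(ngf * 4, ngf * 8)            # 4 -> 2
        self.d5 = down(ngf * 8, ngf * 8, norm=False)  # 2 -> 1 (bottleneck)
        self.u1 = up(ngf * 8, ngf * 8)              # 1 -> 2
        self.u2 = up(ngf * 16, ngf * 4)             # 2 -> 4 (input is skip-concat)
        self.u3 = up(ngf * 8, ngf * 2)              # 4 -> 8
        self.u4 = up(ngf * 4, ngf)                  # 8 -> 16
        self.u5 = nn.Sequential(                    # 16 -> 32
            nn.ConvTranspose2d(ngf * 2, out_ch, 4, stride=2, padding=1),
            nn.Tanh())

    def forward(self, x):
        e1 = self.d1(x); e2 = self.d2(e1); e3 = self.d3(e2)
        e4 = self.d4(e3); b = self.d5(e4)
        y = self.u1(b)
        y = self.u2(torch.cat([y, e4], 1))  # skip connections, as in pix2pix
        y = self.u3(torch.cat([y, e3], 1))
        y = self.u4(torch.cat([y, e2], 1))
        return self.u5(torch.cat([y, e1], 1))

class PatchDiscriminator32(nn.Module):
    """PatchGAN with two stride-2 convs; 6 input channels = input + target."""
    def __init__(self, in_ch=6, ndf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ndf, 4, stride=2, padding=1),        # 32 -> 16
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 4, stride=2, padding=1),      # 16 -> 8
            nn.BatchNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, ndf * 4, 4, stride=1, padding=1),  # 8 -> 7
            nn.BatchNorm2d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 4, 1, 4, stride=1, padding=1))        # 7 -> 6 patch map

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    g, d = UnetGenerator32(), PatchDiscriminator32()
    x = torch.randn(2, 3, 32, 32)
    fake = g(x)
    print(fake.shape)                         # torch.Size([2, 3, 32, 32])
    print(d(torch.cat([x, fake], 1)).shape)   # torch.Size([2, 1, 6, 6])
```

With only two stride-2 layers, each value in the discriminator's patch map sees a much smaller receptive field than the standard 70x70 PatchGAN, which is more appropriate for 32x32 images.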