junyanz / pytorch-CycleGAN-and-pix2pix

Image-to-Image Translation in PyTorch

CycleGAN produces identical images from input (synthetic-to-real translation) [image-level adaptation] #1285

Open itsMorteza opened 3 years ago

itsMorteza commented 3 years ago

I was experimenting with domain transfer and cycle-consistent image-level adaptation using CycleGAN. Following CyCADA's implementation details and issue #586, where the hyperparameters are described (due to some issues on the Torch side), I replicated them in the PyTorch version (ResNet 9-block generator + basic discriminator); nonetheless, the results were nearly identical to the input images. I tried a load size of 1024 and a crop size of 400 for my data (more than 60k images), but even after 20 or 40 epochs the FID results didn't improve. So I tried several different scenarios to make it work.
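One thing worth checking when the outputs collapse to the input is how the generator's loss terms are weighted. Below is a minimal sketch, not the repo's exact code; the function name, weighting scheme, and default values are assumptions based on the CycleGAN paper's conventions. It illustrates why a large cycle/identity weight relative to the GAN term can make a near-identity mapping the easiest solution:

```python
# Hedged sketch of CycleGAN's weighted generator objective (names and
# defaults are assumptions, not the repo's exact code). If the cycle
# and identity terms dominate the adversarial term, the generator can
# minimize the total loss by copying the input, i.e. output ~= input.
def generator_loss(loss_gan, loss_cycle_A, loss_cycle_B, loss_idt,
                   lambda_A=10.0, lambda_B=10.0, lambda_identity=0.5):
    # The identity term is conventionally scaled by lambda_identity * lambda_A.
    return (loss_gan
            + lambda_A * loss_cycle_A
            + lambda_B * loss_cycle_B
            + lambda_identity * lambda_A * loss_idt)

# With these defaults and equal raw losses, the GAN term contributes
# only 1 part in 26 (1 + 10 + 10 + 5), so the reconstruction terms
# dominate the gradient signal.
```

If I remember the repo's options correctly, these weights correspond to the `--lambda_A`, `--lambda_B`, and `--lambda_identity` command-line flags; lowering `--lambda_identity` (even to 0) is a common first experiment when the outputs look identical to the inputs.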

Visually, as in issue #1080, I reached some inconsistent domain transfer where the generated images look almost identical to the input images, and neither the FID nor the KID scores were good for any of them. I attached the basic loss graph (with the GTA2Cityscapes config), which may help locate the problem.

[figure: total_loss]

FID(generated, target domain) > FID(input domain, target domain), i.e., the translated images are farther from the target domain than the untranslated inputs. @junyanz @taesungp Do you have any suggestions? Thanks
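For context on the metric being compared above: FID fits a Gaussian (mean mu, covariance sigma) to the Inception activations of each image set and measures the Fréchet distance between the two Gaussians. Here is a minimal sketch of the distance itself, assuming the activation statistics have already been extracted (e.g. by a tool such as pytorch-fid):

```python
import numpy as np
from scipy import linalg

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet Inception Distance between two Gaussians fitted to
    Inception activations: ||mu1 - mu2||^2 + Tr(S1 + S2 - 2*sqrt(S1 S2))."""
    diff = mu1 - mu2
    # Matrix square root of the covariance product; may have a tiny
    # imaginary component from numerical error, which we discard.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

With this definition, FID(generated, target) > FID(input, target) means the generator's outputs are statistically farther from the target distribution than the raw source images, so the translation is actively hurting.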

junyanz commented 3 years ago

Sorry for the late reply. Is your task similar to GTA2Cityscapes? Do the source and target domains have similar camera angles as well as similar object category distributions? I have noticed that it will not work very well if the source and target domains have different camera poses. It's quite hard to diagnose the issue without seeing the data. Feel free to share your images here or via email.