taesungp / contrastive-unpaired-translation

Contrastive unpaired image-to-image translation, with faster and lighter training than CycleGAN (ECCV 2020, in PyTorch)
https://taesung.me/ContrastiveUnpairedTranslation/

Training FastCUT #94

Open JoonyoungSong opened 3 years ago

JoonyoungSong commented 3 years ago

First of all, thank you for your great work.

I'm trying to train FastCUT for Cityscapes, Cat2Dog, and Horse2Zebra using your code.

However, I have found that the G_GAN loss always converges to 1 when using the default FastCUT setting (lambda_X = 10, lambda_Y = 0).

In addition, the generated output shows no significant changes compared to the input image; for Horse2Zebra in particular, the output is almost identical to the input. The FID I measured (252.4 for horse2zebra) is also far from the value reported in the paper.

When I used lambda_X = 1 and lambda_Y = 0, the loss curve looks reasonable (oscillating around 0.2–0.6) and the output images show significant changes. The FID is also much better: 60.5 for horse2zebra.

I suspect the default lambda_X value is too large, or that I'm missing something when training FastCUT in its default mode. I used the following command: `python train.py --dataroot ./datasets/horse2zebra --name horse2zebra_FastCUT_v1 --CUT_mode FastCUT --display_port 8098 --gpu_ids 0`
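For reference, the lower-weight run described above can be sketched as follows. This assumes the repository exposes the patchwise NCE weight via a `--lambda_NCE` flag that corresponds to lambda_X in the paper; the flag name and the run name `horse2zebra_FastCUT_lambda1` are assumptions here, not something confirmed in this issue:

```shell
# Hedged sketch: same command as the default run, but overriding the NCE
# weight (assumed to be lambda_X) from the FastCUT default of 10 down to 1.
python train.py \
  --dataroot ./datasets/horse2zebra \
  --name horse2zebra_FastCUT_lambda1 \
  --CUT_mode FastCUT \
  --lambda_NCE 1 \
  --display_port 8098 \
  --gpu_ids 0
```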

Thank you.