taesungp / contrastive-unpaired-translation

Contrastive unpaired image-to-image translation, faster and lighter training than cyclegan (ECCV 2020, in PyTorch)
https://taesung.me/ContrastiveUnpairedTranslation/

Why does my generator seem to cheat my discriminator by generating the same picture? #181

Open tianjilong123 opened 7 months ago

tianjilong123 commented 7 months ago
(screenshots attached)

I want to ask why my generator seems to generate the same output picture in some epochs, while the loss stays at a relatively stable value. Is this mode collapse? Can anyone tell me how to handle it? Thanks for your replies!

Bananaspirit commented 5 months ago

@tianjilong123 Hello! I have the same problem. Did you manage to solve it?

tianjilong123 commented 5 months ago

> @tianjilong123 Hello! I have the same problem. Did you manage to solve it?

I didn't continue using this code. But my guess is that the patch size doesn't match the input images (too big or too small), which caused the output to focus on a local feature.
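For what it's worth, one way to sanity-check that guess: with the usual defaults, the data pipeline resizes each image to --load_size and then takes a --crop_size crop, so the raw resolution of your images determines how much of the scene (and how much downscaling) ends up in each 256px training sample. A rough, hypothetical check along these lines (the dataset path and the default values are assumptions; adjust to whatever you actually pass):

```python
# Rough sanity check: how much do the raw images get scaled before the training crop?
import glob
from PIL import Image

LOAD_SIZE, CROP_SIZE = 286, 256   # whatever you pass as --load_size / --crop_size

paths = sorted(glob.glob("datasets/my_fields/trainA/*"))[:10]   # placeholder path
for path in paths:
    w, h = Image.open(path).size
    # With the default "resize_and_crop" preprocessing, the image is resized to
    # LOAD_SIZE x LOAD_SIZE before the CROP_SIZE crop, so each raw pixel is
    # shrunk by roughly these factors:
    print(f"{path}: {w}x{h}, downscaled ~{w / LOAD_SIZE:.1f}x / {h / LOAD_SIZE:.1f}x "
          f"before the {CROP_SIZE}px crop")
```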

Bananaspirit commented 5 months ago

@tianjilong123 Interestingly, I now have the same suspicion about the patch size. I'm trying to apply the texture of real fields to a field from the simulator. For the first 50 epochs the generator tries to preserve the structure of the image from the simulator, but then something strange happens: the generator simply recreates the original field image, ignoring the structure.
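One thing that might be worth ruling out here (just a guess from the paper's formulation, not something confirmed for this issue): if the adversarial term dominates the PatchNCE term, the easiest way for the generator to fool the discriminator is to paint a plausible real field regardless of the simulator input. The weights are exposed as --lambda_GAN, --lambda_NCE and --nce_idt, so raising lambda_NCE (or keeping nce_idt enabled) strengthens the input/output correspondence. Schematically, the generator objective combines roughly as below (illustrative names, not the repo's exact code; see models/cut_model.py for the real thing):

```python
# Schematic of how the CUT generator loss is weighted (illustrative only).
def generator_loss(loss_gan, loss_nce_x, loss_nce_y,
                   lambda_gan=1.0, lambda_nce=1.0, nce_idt=True):
    # loss_gan   : adversarial loss on G(x), weighted by --lambda_GAN
    # loss_nce_x : PatchNCE loss between x and G(x), weighted by --lambda_NCE
    # loss_nce_y : identity PatchNCE loss between y and G(y), used when --nce_idt
    loss_nce = 0.5 * (loss_nce_x + loss_nce_y) if nce_idt else loss_nce_x
    return lambda_gan * loss_gan + lambda_nce * loss_nce
```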

Bananaspirit commented 5 months ago

Unfortunately, the authors stopped answering questions and issues a long time ago, so we have to piece things together from fragments.

DongDongNuNa commented 3 months ago

@Bananaspirit Hi, did you change the number of layers (nce_layers) or the number of patches (num_patches in networks/netF)? I found that if we reduce the number of patches or the number of layers too much, the generated images tend to look like that (a square appears in the center, and the outputs tend to look the same...).
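That matches how the PatchNCE term works: it only ties G(x) to x at the sampled feature locations of the chosen layers, so with very few patches or layers the spatial correspondence constraint becomes weak and the GAN loss can pull every output toward one "typical" real image. A simplified, self-contained sketch of the idea (not the repo's actual PatchSampleF / PatchNCELoss implementation, which adds an MLP projection head, multi-layer features, and other details):

```python
# Simplified illustration of patchwise contrastive matching (assumes PyTorch).
import torch
import torch.nn.functional as F

def patch_nce(feat_src, feat_out, num_patches=256, tau=0.07):
    """feat_src, feat_out: (B, C, H, W) features of the input and the output,
    taken from the same generator layer. Each sampled location in the output
    should match the feature at the SAME location in the input (positive) and
    differ from the other sampled locations (negatives)."""
    b, c, h, w = feat_src.shape
    idx = torch.randperm(h * w)[:min(num_patches, h * w)]        # random locations
    src = feat_src.flatten(2)[:, :, idx].permute(0, 2, 1)        # (B, N, C)
    out = feat_out.flatten(2)[:, :, idx].permute(0, 2, 1)        # (B, N, C)
    src, out = F.normalize(src, dim=-1), F.normalize(out, dim=-1)
    logits = torch.bmm(out, src.transpose(1, 2)) / tau           # (B, N, N)
    target = torch.arange(logits.size(1)).expand(b, -1)          # positive = same location
    return F.cross_entropy(logits.flatten(0, 1), target.flatten())

# With num_patches very small (or only one shallow layer supervised), this term
# constrains only a handful of locations, so the adversarial loss can dominate
# and the generator can drift toward producing one plausible-looking output.
```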