zsyzzsoft / co-mod-gan

[ICLR 2021, Spotlight] Large Scale Image Completion via Co-Modulated Generative Adversarial Networks

some bad cases #15

Open yezhanglang opened 3 years ago

yezhanglang commented 3 years ago

Hi, I have used the co-mod-gan-places2-050000.pkl model to restore some food images. In most cases, the result is really good. However, there are some bad cases. How can I fix them? Just retrain the model with some food images? Or something else?

Here are some bad cases. The mask is in the middle of the picture:

(two bad-case example images attached)

zsyzzsoft commented 3 years ago

Modeling long-tail distributions is generally hard for generative models. With the current state of the art, I'm afraid the only solution is to reduce the variety of the dataset so that the model works better on a specific type of image.
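A minimal sketch of narrowing a dataset before retraining (the paths and the filename-based category rule here are assumptions for illustration, not part of this repo — adapt the matching to however your images are labeled):

```python
import shutil
from pathlib import Path

def filter_dataset(src_dir, dst_dir, keep_keywords):
    """Copy only images whose filename mentions a target category.

    src_dir / dst_dir are hypothetical paths; the substring match on
    the filename is a stand-in for whatever labels you actually have.
    Returns the number of images kept.
    """
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    kept = 0
    for img in src.glob("*.jpg"):
        if any(k in img.name.lower() for k in keep_keywords):
            shutil.copy(img, dst / img.name)
            kept += 1
    return kept
```

The filtered directory can then be packed into the dataset format the training script expects and used for fine-tuning on the single domain.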

tiwarikaran commented 3 years ago

Hi @yezhanglang, can you share how you managed training?

ImmortalSdm commented 2 years ago

@zsyzzsoft I'm getting worse results after 2960k iterations. Any ideas? https://s2.loli.net/2022/03/13/u8qtRLrPWwgBGyZ.png Besides, is the fake_init normal? It seems the masks are not binary. https://s2.loli.net/2022/03/13/O7SwvqP9Mh3TmKl.png
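A quick way to check whether the masks really are binary (a hypothetical helper, assuming uint8 masks where 0/255 are the two valid values — lossy compression or interpolated resizing is a common reason gray intermediate values creep in):

```python
import numpy as np

def is_binary_mask(mask):
    """Return True if every pixel of a uint8 mask is exactly 0 or 255."""
    vals = np.unique(np.asarray(mask))
    return bool(np.all(np.isin(vals, (0, 255))))
```

If this returns False for your masks, thresholding them back to {0, 255} before training (e.g. `(mask > 127) * 255`) restores binarity.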

zsyzzsoft commented 2 years ago

If the FID metric suddenly spikes up at some iteration, you can try resuming training from the last normal checkpoint. Sometimes this happens because of a GPU issue. The fake_init looks normal.
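Finding "the last normal checkpoint" can be automated by scanning the FID history for a sudden jump (a sketch under assumptions: the `(kimg, fid)` pairs and the `spike_factor` threshold are hypothetical, not something this repo logs in this exact form):

```python
def last_checkpoint_before_spike(fid_history, spike_factor=2.0):
    """Given [(kimg, fid), ...] in training order, return the kimg of
    the last snapshot before FID suddenly jumps.

    A snapshot whose FID exceeds spike_factor times the previous
    snapshot's FID counts as a spike. Returns None if no spike occurs.
    """
    for (prev_kimg, prev_fid), (_, fid) in zip(fid_history, fid_history[1:]):
        if fid > spike_factor * prev_fid:
            return prev_kimg
    return None
```

Resuming would then mean pointing the training script at the snapshot saved at the returned kimg.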

ImmortalSdm commented 2 years ago

> If the FID metric suddenly spikes up at some iteration, you can try resuming training from the last normal checkpoint. Sometimes this happens because of a GPU issue. The fake_init looks normal.

Thanks for your quick reply! The FID drops smoothly, but all the training samples look bad. I have no idea what happened, since the fake_init looks normal.