zsyzzsoft / co-mod-gan

[ICLR 2021, Spotlight] Large Scale Image Completion via Co-Modulated Generative Adversarial Networks

Parameter tuning and re-implementation with Pytorch #19

Open htzheng opened 3 years ago

htzheng commented 3 years ago

First, thank you for the impressive work! I am currently re-implementing a PyTorch version of co-mod-gan, and I have several questions about the model:

  1. Have you tried different R1 regularization weights? Empirically, I found that with an R1 weight smaller than 10, the L1 loss converges faster; I wonder if you tried other R1 weights?
  2. Does dropout on the global code improve performance?
  3. Have you tried adding skip connections to the encoder?
  4. Also, why is the style mixing weight set to 0.5?

Thanks

zsyzzsoft commented 3 years ago

Unfortunately, I may not have useful information regarding your questions. Most of the hyperparameters were chosen by intuition, as we didn't have many resources to run experiments.

styler00dollar commented 3 years ago

This sounds amazing @htzheng. I am looking forward to the code. Good luck. I would love to try a PyTorch version, since TensorFlow 1 is painful to work with. I didn't manage to use TensorFlow 2 or convert the model to ONNX, which makes co-mod-gan impossible to use with newer GPUs. With PyTorch, usage should be easy, and I could add it to my own code.

htzheng commented 3 years ago

@styler00dollar It is still hard for me to release the code while I am on my summer internship, but I will try to release it after September. You could try modifying the training code and model from https://github.com/rosinality/stylegan2-pytorch