ChengBinJin closed this issue 4 years ago
In this case, you could decrease the `--learning_rate` (e.g., make it ten times smaller) in `invert.py`. By the way, we have updated the encoder network and its weights in the repo; you could try it without adjusting the learning rate.
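A minimal sketch of why shrinking the step size by a factor of ten can stabilize the latent optimization: plain gradient descent on a toy quadratic loss. All names here are illustrative and not part of `invert.py`.

```python
# Toy illustration: gradient descent on f(z) = z**2.
# With too large a step size the iterates oscillate and diverge;
# dividing the step size by ten makes them converge.
def optimize(lr, steps=100, start=5.0):
    """Run gradient descent on f(z) = z**2; return final |z|."""
    z = start
    for _ in range(steps):
        grad = 2.0 * z       # df/dz
        z = z - lr * grad    # gradient step
    return abs(z)

print(optimize(1.1) > optimize(0.11))  # True: 1.1 diverges, 0.11 converges
```

The same intuition applies to latent-code optimization: if the reconstruction oscillates or degrades across iterations, a smaller learning rate is a reasonable first thing to try.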
@zhujiapeng After updating to the new encoder network and its weights, I got the following error. Did you encounter this problem? `ModuleNotFoundError: No module named 'tensorflow.contrib.nccl'`
We added Synchronized BatchNorm to the encoder, which uses nccl in TensorFlow. The TF version I used to train the encoder is 1.12, so you need to switch to TF 1.12.
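A hedged sketch of a version guard one could place before the nccl import: the thread reports that `tensorflow.contrib.nccl` imports on TF 1.12 but raises `ModuleNotFoundError` on 1.14, so checking the version first yields a clearer error message. This is a pure-Python illustration; the actual import is left out.

```python
# Guard for the tensorflow.contrib.nccl dependency. Per this thread,
# the module imports on the TF 1.12 line but not on 1.14 (and
# tf.contrib is gone entirely in TF 2.x).
def contrib_nccl_expected(tf_version):
    """Return True for versions where tensorflow.contrib.nccl is
    known (per this thread) to import, i.e. the 1.12 line."""
    parts = tf_version.split(".")
    major, minor = int(parts[0]), int(parts[1])
    return (major, minor) == (1, 12)

print(contrib_nccl_expected("1.12.2"))  # True
print(contrib_nccl_expected("1.14.0"))  # False
```

In a real script, one would call this on `tf.VERSION` and raise a descriptive error asking the user to install TF 1.12 before the import is attempted.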
@zhujiapeng Thank you for your quick reply. I am currently using TensorFlow 1.14.0; I will try it on 1.12. 👍
@zhujiapeng I succeeded in running the new model using TF 1.12.2 as you suggested. Although the image 00015.png still shows a similar problem, the new model improves a lot on the other images. Great work! I will close this issue.
@ChengBinJin I picked the wrong model when uploading; I have updated it, sorry about that. The new model's inversion results are shown below. This problem may still happen on some other images; it is caused by the small batch size we had to use when training the encoder due to our limited GPU memory. Anyway, thank you for testing our model!
@zhujiapeng I tested the new model again, and now the results are exactly the same as yours! Great work!
@zhujiapeng @ShenYujun @zhoubolei I like your work, including InterFaceGAN.
I tested some FFHQ images (00000.png through 00029.png, 30 images) using the In-Domain GAN Inversion method. For some examples, the output of the encoder looks good, but during the optimization iterations the results do not seem correct. The following two images show the results after 100 iterations and 1,000 iterations; in some cases the result even becomes worse. For example, the problem with 00000.png was corrected over longer optimization, but 00002.png shows no big difference, and 00015.png becomes worse. Do you have any ideas about this?