Closed ShaoTengLiu closed 1 year ago
I'm getting strange results as well. I'm running the exact same settings, but every image I generate looks exactly like your first one (steps 100-500). It never converges. Did you change anything in the .yaml config to get these results?
Also, in this instance it looks like overfitting. You may be able to resolve this by lowering the learning rate.
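If it is overfitting, the usual place to intervene is the training config. A hedged sketch of the kind of change I mean (the key names here are illustrative; check what sample.yml actually calls them in this repo):

```yaml
# Illustrative only: actual key names depend on this repo's sample.yml.
train:
  learning_rate: 1.0e-5   # try lowering the rate, e.g. 3e-5 -> 1e-5
  max_train_steps: 300    # stopping earlier also limits overfitting
```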
Hi, thanks for your suggestions.
I didn't change anything; I'm just using sample.yml. My environment is torch 1.12.1 + CUDA 11.3.
Using prior_preservation prevents overfitting to some extent, but the results are still not stable.
I tried setting prior_preservation anywhere from 0.1 to 1 but still cannot reproduce the results shown in the README.
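For context on what that knob does: in DreamBooth-style training, the prior-preservation weight scales a class-prior loss term that is added to the instance loss, so larger values pull the model back toward the pretrained prior. A toy sketch of that combination (the function and argument names are mine for illustration, not this repo's actual API):

```python
def combined_loss(instance_loss: float, prior_loss: float, prior_weight: float) -> float:
    """Toy DreamBooth-style objective: the prior-preservation term is
    scaled by prior_weight and added to the instance (reconstruction) term."""
    return instance_loss + prior_weight * prior_loss

# Raising prior_weight from 0.1 toward 1.0 penalizes drift from the prior more.
low = combined_loss(0.5, 0.2, prior_weight=0.1)
high = combined_loss(0.5, 0.2, prior_weight=1.0)
print(low < high)  # True: a heavier prior term increases the total penalty
```

So sweeping the weight trades identity fidelity against stability; it does not by itself fix a learning rate that is too high.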
The model does overfit easily. As @ExponentialML mentioned, using smaller learning rates helped in my case. This is the training progress I got with sample.config:
step 0 · step 100 · step 200 · step 300 · step 400 · step 500
Thanks for your reply!
I found the CUDA version matters: changing to CUDA 11.6 solved my problem.
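For anyone hitting the same thing: PyTorch reports the CUDA toolkit it was built against in `torch.version.cuda`, which you can compare against the toolkit installed on your system (e.g. from `nvcc --version`). A minimal stdlib-only sketch of that comparison (the helper names are mine, not from this repo):

```python
def cuda_major_minor(version: str) -> tuple:
    """Parse (major, minor) from a CUDA version string like '11.6' or '11.6.2'."""
    major, minor = version.split(".")[:2]
    return int(major), int(minor)

def builds_match(torch_build_cuda: str, system_cuda: str) -> bool:
    """True when the PyTorch build and the system toolkit agree on major.minor."""
    return cuda_major_minor(torch_build_cuda) == cuda_major_minor(system_cuda)

# In practice you would pass torch.version.cuda as the first argument.
print(builds_match("11.3", "11.6"))  # False: the mismatch reported in this thread
```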
Hi, thanks for this interesting implementation!
I ran the given script on a 3090 and got the following results:
Step 0:
Step 200:
Step 500:
The results look acceptable at step 200. However, the output reverts to the original video at step 500. Could you please give me some hints on this problem? Thank you very much!