Closed dsn01 closed 9 months ago
Yes, I tried to use the same parameters as the paper, but they did not work for me, so I tuned them.
Hello, thank you for your timely reply. I'm having trouble training the whole network with your code pipeline: I can't get a result as good as the ones you show. Do the VGG network's weights also get updated during backpropagation? I trained the network in several steps, so I suspect I didn't freeze the VGG weights in advance. Now I find that the loss of G decreases very slowly, while the loss of D decreases to approximately zero. Can you give me some advice? Thank you sincerely!
I don't train the VGG net; its weights are frozen. You may need to tune the learning rates and the loss weights of G and D. I only got a good result on the Hayao dataset.
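For anyone running into the same issue, the freezing described above can be sketched like this in PyTorch (the layers below are a toy stand-in for the VGG feature extractor, not this repo's actual model; the pattern of disabling gradients is the point):

```python
import torch.nn as nn

# Hypothetical stand-in for a VGG-style feature extractor;
# the real perceptual network is the repo's pretrained VGG.
vgg_features = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, 3, padding=1),
)

# Freeze every parameter so backprop never updates the extractor.
for p in vgg_features.parameters():
    p.requires_grad = False

# eval() additionally fixes dropout / batch-norm behavior during training.
vgg_features.eval()
```

With this in place, the VGG parameters can also simply be left out of the optimizer's parameter list, so even a stray gradient would never be applied.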
Hello @Zealist-boy, have you found a solution to the problem you were facing? I tried training the Kimetsu dataset with another video file, but as you said, G decreases by only about 0.01 per batch and D is just too low. The inference images look somewhat different but not really transformed. I have trained for about 60-80 epochs.
After reading the original paper and analyzing your code, I find that the generator network in your code is not the same as the one described in the paper, and your learning rates for G and D do not match the paper's either, nor do the loss weights. Why? Is this based on your practical experience? Thank you!
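The asymmetric settings being asked about (separate learning rates for G and D, plus weighted generator losses) look roughly like the following sketch in PyTorch. All networks, rates, and weights below are illustrative placeholders, not the values actually used in this repo or in the paper:

```python
import torch
import torch.nn as nn

# Placeholder networks; the real G and D are the repo's generator/discriminator.
G = nn.Linear(8, 8)
D = nn.Linear(8, 1)

# Separate optimizers let G and D use different learning rates
# (the specific values here are made up for illustration).
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=4e-5, betas=(0.5, 0.999))

# The generator objective is a weighted sum of loss terms;
# tuning these weights changes how fast G's loss moves relative to D's.
w_adv, w_content = 1.0, 10.0          # hypothetical weights
adv_loss = torch.tensor(0.3)          # dummy adversarial loss value
content_loss = torch.tensor(0.05)     # dummy VGG content loss value
g_loss = w_adv * adv_loss + w_content * content_loss
```

If D collapses to near zero while G barely moves, lowering D's learning rate (or raising the content weight) is a common way to rebalance the two, which may be what the tuning mentioned earlier amounts to.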