Closed wtupc96 closed 4 years ago
Hi @wtupc96
Yes. In stage 2, I do not train D (by setting its loss weight to zero) and keep training G.
But what about the false (I think) guidance from the randomly initialized D when training G?
Sorry, I may not have understood your question.
In stage 2, the D loss is set to zero in this line:
https://github.com/layumi/Seg-Uncertainty/blob/master/trainer_ms_variance.py#L223
So the discriminator will no longer affect G. I kept the loss calculation, but it is not back-propagated to G.
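A minimal numeric sketch of why this works (plain Python with toy numbers, not the repo's actual PyTorch code): the generator's total loss has the form `seg_loss + lambda_adv * adv_loss`, so with `lambda_adv = 0` the gradient that reaches G is exactly the segmentation gradient, no matter what the random discriminator outputs.

```python
def total_grad(seg_grad, adv_grad, lambda_adv):
    """Gradient of seg_loss + lambda_adv * adv_loss w.r.t. G's output."""
    return [s + lambda_adv * a for s, a in zip(seg_grad, adv_grad)]

# Hypothetical toy gradients, purely for illustration:
seg_grad = [0.3, -0.1, 0.7]   # gradient from the segmentation loss
adv_grad = [5.0, -2.0, 9.0]   # (possibly wild) gradient from a random D

# With lambda_adv = 0, the adversarial term contributes nothing,
# so the untrained D cannot "instruct" G.
print(total_grad(seg_grad, adv_grad, 0.0))  # -> [0.3, -0.1, 0.7]
```

This is why the adversarial loss can still be computed and logged without ever steering the generator.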
Oh, I see. Thanks for your reply!
Hi, thanks for your great work! Recently I've been reading your code and I have a question about stage 2 (rectifying). You set `lambda_adv_target1` and `lambda_adv_target2` to 0, which means there is no adversarial training in stage 2 (right?). But you keep training the generator with false guidance from the discriminator (the discriminator weights are not loaded in stage 3); you annotated here that you keep training G, but here you never update D. Is this the right behavior, or did I misunderstand something?