layumi / Seg-Uncertainty

IJCAI2020 & IJCV2021 :city_sunrise: Unsupervised Scene Adaptation with Memory Regularization in vivo
https://arxiv.org/abs/1912.11164
MIT License

Question about stage 3 #5

Closed wtupc96 closed 4 years ago

wtupc96 commented 4 years ago

Hi, thanks for your great work! Recently I've been reading your code and I have a question about stage 2 (rectifying). You set lambda_adv_target1 and lambda_adv_target2 to 0, which means there is no adversarial training in stage 2 (right?). However, you keep training the generator with what I think is false guidance from the discriminator (the discriminator weights are not loaded in stage 3). You annotated here that you keep training G, but here you never update D. Is this the intended behavior, or have I misunderstood something?

layumi commented 4 years ago

Hi @wtupc96

Yes. In stage 2, I do not train D (the adversarial loss weight is set to zero) and keep training G.
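
For readers following this thread, here is a minimal sketch (an illustration only, not the repository's trainer code) of why zeroing the adversarial weights removes the discriminator's influence on the generator update; the names mirror the `lambda_adv_target1` / `lambda_adv_target2` flags mentioned in the question.

```python
import torch

# Sketch: the generator objective is the segmentation loss plus weighted
# adversarial terms. With both lambdas set to 0, the adversarial terms
# contribute nothing, so G is driven by the segmentation losses alone.
lambda_adv_target1 = 0.0  # stage-2 setting discussed above
lambda_adv_target2 = 0.0

loss_seg = torch.tensor(1.25, requires_grad=True)   # placeholder segmentation loss
loss_adv1 = torch.tensor(0.40, requires_grad=True)  # placeholder adversarial losses
loss_adv2 = torch.tensor(0.35, requires_grad=True)

loss_G = loss_seg + lambda_adv_target1 * loss_adv1 + lambda_adv_target2 * loss_adv2
loss_G.backward()

print(loss_seg.grad)   # tensor(1.) -> the segmentation loss still updates G
print(loss_adv1.grad)  # tensor(0.) -> zero weight: no gradient from D's feedback
```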

wtupc96 commented 4 years ago

But what about the false (I think) guidance from the randomly initialized D when training G?

layumi commented 4 years ago

Sorry, I may not have understood your question.

In stage 2, the D loss weight is set to zero in this line:
https://github.com/layumi/Seg-Uncertainty/blob/master/trainer_ms_variance.py#L223 So the discriminator will not affect G anymore.

I kept the loss calculation, but it will not be backpropagated to G.
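
To make that last point concrete, here is a small hedged sketch (assumed stand-in modules, not the actual code in trainer_ms_variance.py) showing that the adversarial loss can still be computed, e.g. for logging, while a zero weight keeps D's feedback out of the gradients that reach G.

```python
import torch
import torch.nn as nn

# Hedged sketch of the stage-2 behaviour described above: the adversarial
# loss is evaluated, but multiplied by a zero weight, so it contributes no
# gradient to G (and D receives no optimizer step in this stage anyway).
G = nn.Conv2d(3, 2, kernel_size=1)   # stand-in generator / segmentation head
D = nn.Conv2d(2, 1, kernel_size=1)   # stand-in discriminator
bce = nn.BCEWithLogitsLoss()

x = torch.randn(1, 3, 8, 8)
target = torch.randint(0, 2, (1, 8, 8))

pred = G(x)
loss_seg = nn.CrossEntropyLoss()(pred, target)

# Adversarial term is still computed, but weighted by zero.
lambda_adv = 0.0
d_out = D(torch.softmax(pred, dim=1))
loss_adv = bce(d_out, torch.ones_like(d_out))

loss_G = loss_seg + lambda_adv * loss_adv
loss_G.backward()

print(G.weight.grad.abs().sum() > 0)   # True: G is updated by the segmentation loss
print(D.weight.grad.abs().sum() == 0)  # True: the zero-weighted branch passes no signal
```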

wtupc96 commented 4 years ago

Oh, I see. Thanks for your reply!