Open · songbo0925 opened this issue 3 years ago
Hi @songbo0925, thanks for your interest in our paper. The weights are shared between dis_a and dis_b, and likewise between gen_a and gen_b. Therefore, only one optimizer is needed.
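In PyTorch this kind of weight sharing falls out of plain object aliasing: assuming the assignment at trainer.py L180-181 is an ordinary `self.gen_b = self.gen_a`, both names are bound to the same `nn.Module`, so one optimizer over `gen_a.parameters()` updates "both" generators. A minimal sketch, using a hypothetical `nn.Linear` stand-in for DG-Net's generator class:

```python
import torch
import torch.nn as nn

# Hypothetical tiny module standing in for DG-Net's generator.
gen_a = nn.Linear(4, 4)
gen_b = gen_a  # plain assignment: gen_b is an alias of gen_a, not a copy

print(gen_b is gen_a)  # True: one nn.Module object, one set of weights

# One optimizer over gen_a's parameters therefore covers gen_b as well.
opt = torch.optim.SGD(gen_a.parameters(), lr=0.1)
loss = gen_b(torch.randn(2, 4)).sum()  # forward through the alias
loss.backward()
opt.step()

# The update is visible through either name; the parameters cannot diverge.
assert torch.equal(gen_a.weight, gen_b.weight)
```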
Hi @layumi
Thanks for your wonderful work and your reply.
In lines 180-181 of trainer.py, gen_b is set to be the same as gen_a just this once. But if only the parameters of gen_a are updated in each iteration, then gen_b and gen_a may have different parameters in the next forward pass. So I want to know how you achieve weight sharing, and likewise for dis_a and dis_b. Maybe I overlooked some code; please advise.
https://github.com/NVlabs/DG-Net/blob/a067be117b43c7c553275b6570b3a3bf8da465e0/trainer.py#L180-L181
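For contrast, a minimal sketch of the divergence this question worries about: it would occur only if gen_b were a real copy (e.g. via `copy.deepcopy`), not a plain alias. Module names here are hypothetical stand-ins:

```python
import copy
import torch
import torch.nn as nn

gen_a = nn.Linear(4, 4)
alias = gen_a                 # what a plain assignment like L180-181 does
clone = copy.deepcopy(gen_a)  # what it would take for the weights to diverge

opt = torch.optim.SGD(gen_a.parameters(), lr=0.1)
gen_a(torch.randn(2, 4)).sum().backward()
opt.step()

print(torch.equal(alias.weight, gen_a.weight))  # True: the alias tracks every update
print(torch.equal(clone.weight, gen_a.weight))  # False: a deep copy is left behind
```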
In trainer.py, why do you only update the parameters of dis_a and gen_a, and ignore the parameters of dis_b and gen_b? https://github.com/NVlabs/DG-Net/blob/a067be117b43c7c553275b6570b3a3bf8da465e0/trainer.py#L242-L248
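A short sketch of why the optimizers only need the *_a parameters (hypothetical stand-in modules, not the repository's actual classes; the Adam hyperparameters below are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the shared discriminator and generator.
dis_a, gen_a = nn.Linear(8, 1), nn.Linear(8, 8)
dis_b, gen_b = dis_a, gen_a  # the *_b names alias the same modules

# Because of the aliasing, dis_a.parameters() already covers dis_b;
# listing both would only repeat the same tensors, so one optimizer
# per role (discriminator / generator) is enough.
dis_opt = torch.optim.Adam(dis_a.parameters(), lr=1e-4, betas=(0.5, 0.999))
gen_opt = torch.optim.Adam(gen_a.parameters(), lr=1e-4, betas=(0.5, 0.999))
```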