Closed Liu-1994 closed 4 years ago
Thank you @Liu-1994. `f` is the appearance code for image generation. We do not want the generation losses to update `f`; thus, we use the `detach` here. In this way, `f` is mainly updated via the re-id related losses.
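The detach trick described above can be sketched in isolation. This is a minimal illustration, not the actual DG-Net code: the toy `encoder`/`decoder` modules and loss names are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

# Toy stand-ins (hypothetical sizes); NOT the real ft_netAB modules.
encoder = nn.Linear(8, 4)   # plays the role of the appearance encoder
decoder = nn.Linear(4, 8)   # plays the role of the generator/decoder

x = torch.randn(2, 8)
f = encoder(x)

# Generation-style loss on the *detached* f: gradients stop at f,
# so only the decoder is updated by this loss.
gen_loss = decoder(f.detach()).pow(2).mean()
gen_loss.backward()
assert encoder.weight.grad is None       # detach blocked the encoder
assert decoder.weight.grad is not None   # decoder still learns

# Re-id-style loss on the *attached* f: this is what updates the encoder.
id_loss = f.pow(2).mean()
id_loss.backward()
assert encoder.weight.grad is not None   # encoder updated via the id loss
```

Because the detached copy and the attached `f` share values but not graph history, the two losses can coexist while each updates only its intended module.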
@layumi Thanks for replying. I understood that. And I have another small question: what is the role of `loss_gen_recon_f_a` and `loss_gen_recon_f_b`? As `f` is the input of the generator, I think the two losses may not update `G`.
Input -> Appearance Encoder (No Update) -> f (Detach) -> Decoder (Has Gradient) -> Generated Image (Has Gradient) -> Appearance Encoder (No Update, But Has Gradient) -> f
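The diagram describes the intended gradient path: the second pass through the appearance encoder uses it only as a frozen differentiable function, so a feature reconstruction loss could still reach the decoder through the generated image. A minimal sketch of that round trip, under the assumption that the re-encoding step is *not* detached (toy modules, hypothetical names; whether this matches the released code depends on where `detach()` is applied, as discussed below):

```python
import torch
import torch.nn as nn

encoder = nn.Linear(8, 4)  # appearance encoder (frozen for the gen loss)
decoder = nn.Linear(4, 8)  # decoder / generator

# "No Update": freeze the encoder's own weights.
for p in encoder.parameters():
    p.requires_grad_(False)

x = torch.randn(2, 8)
f = encoder(x).detach()        # f (Detach): no grad back to the encoder
img = decoder(f)               # Generated Image (Has Gradient)
f_recon = encoder(img)         # gradient flows *through* the frozen encoder
                               # into the decoder ("No Update, But Has Gradient")

recon_f_loss = (f_recon - f).pow(2).mean()
recon_f_loss.backward()

assert decoder.weight.grad is not None  # decoder is reachable by this loss
assert encoder.weight.grad is None      # encoder weights stay untouched
```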
@layumi Thanks for the reply. I have understood the process except the last step. As `f.detach()` is executed inside `ft_netAB.forward()`, I think the final `f` in the above process is also detached, so there is no back-propagation for f -> Appearance Encoder. When I debug the project, the `grad_fn` of `f_a` and `f_a_recon` are both `None`.
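This observation can be reproduced outside the project: `detach()` returns a tensor cut from the autograd graph, whose `grad_fn` is `None`, so any loss built on it cannot reach modules upstream of the detach. A minimal check in plain PyTorch (not the project code):

```python
import torch

x = torch.randn(3, requires_grad=True)
f = x * 2
assert f.grad_fn is not None       # still attached to the graph

f_detached = f.detach()
assert f_detached.grad_fn is None  # cut from the graph, as observed for f_a
assert f_detached.requires_grad is False
```

So if the detach happens inside `forward()`, every downstream consumer of that output sees a graph-less tensor, which matches the `None` `grad_fn` seen while debugging.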
Thanks for the great suggestion. I have not checked it yet; it may not work. In fact, the feature reconstruction loss has a similar role to the ID recon loss.
Thank you very much for your reply. I have nothing else to ask about this issue. By the way, if it is convenient for you, could you please take a look at another issue (#39) I raised?
Hello, thank you very much for providing the implementation code of the DG-Net model. I encountered some problems while working with the project, and I would be honored if you could give me some suggestions.
I found `f = f.detach()` in `ft_netAB`. This causes the vector `f` to have no gradient, so the `loss_gen_recon_f_*` losses make no contribution to the model parameter update. Am I misunderstanding something, or is there a problem in the code?
I will be grateful if you can give me some suggestions. Thank you!