Closed: juliendenize closed this issue 3 years ago.
In `configs/latest.yaml`, `recon_s_w` and `recon_f_w` are set to 0, which seems strange. Are the two reconstruction losses not used during training?
@shuxjweb in the trainer you can check that, after several thousand iterations, the weights of these losses are increased at each iteration up to a certain value.
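For illustration, here is a minimal sketch of that warm-up pattern. The function name `warmup_weight` and all the iteration counts are hypothetical, not the repository's actual code:

```python
# Sketch of a loss-weight warm-up: the weight starts at 0 and is ramped
# linearly up to a maximum once training passes a threshold of iterations.
# All names and numbers here are made up for illustration.

def warmup_weight(iteration, start_iter=10000, ramp_iters=20000, max_weight=5.0):
    """Return 0 before `start_iter`, then ramp linearly up to `max_weight`."""
    if iteration < start_iter:
        return 0.0
    progress = min((iteration - start_iter) / ramp_iters, 1.0)
    return progress * max_weight

# Example: the effective weight applied to a reconstruction loss at step `it`:
# loss_total = ... + warmup_weight(it) * loss_gen_recon_f_a
```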
Hi @juliendenize, sorry for the late response.
Yes. We detach the id embedding to prevent the gradient from flowing to E_appearance, which may compromise the re-id performance. The gradient of `loss_gen_recon_f_a` mainly works for optimising the decoder.
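To make the effect of the detach concrete, here is a self-contained PyTorch toy (stand-in `nn.Linear` modules rather than the actual E_appearance and decoder) showing that detaching the embedding blocks gradients to the encoder while a module downstream of the detach still receives them:

```python
# Toy demonstration of the detach pattern: gradients from a reconstruction
# loss cannot reach the encoder, but still reach the decoder.
import torch
import torch.nn as nn

encoder = nn.Linear(8, 4)   # stand-in for the appearance encoder
decoder = nn.Linear(4, 8)   # stand-in for the decoder

x = torch.randn(2, 8)
f = encoder(x)
recon = decoder(f.detach())  # gradient flow to `encoder` is cut here

loss = nn.functional.l1_loss(recon, x)
loss.backward()

print(encoder.weight.grad)  # None: the encoder receives no gradient
print(decoder.weight.grad)  # populated: the decoder is still optimised
```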
Hi @shuxjweb
Sorry for the late reply. Actually, we use these losses in a warm-up manner, as @juliendenize mentioned.
@layumi thank you for your answer. Indeed, I tried to remove the detach function and the GAN collapsed, so I see why you did that, and it is truly inspiring. However, I don't see how the gradient of `loss_gen_recon_f_a` would optimize the decoder, because both `f_a_recon` and `f_a` are detached.
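The following toy snippet (made-up tensors, not the repository's code) shows the point concretely: an L1 loss between two detached tensors has no `grad_fn`, so it cannot push a gradient into any parameter:

```python
# When both sides of the loss are detached, the loss is a constant with
# respect to every model parameter and constrains nothing.
import torch
import torch.nn.functional as F

f_a = torch.randn(2, 4, requires_grad=True)        # stand-in id embedding
f_a_recon = torch.randn(2, 4, requires_grad=True)  # stand-in reconstruction

loss = F.l1_loss(f_a_recon.detach(), f_a.detach())
print(loss.requires_grad)  # False: no grad_fn, so calling loss.backward()
                           # would even raise a RuntimeError
```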
Thanks @juliendenize. Yes, you are right. The gradient of `loss_gen_recon_f_a` would not optimize the decoder, due to the `detach`. I think I just left the API for an ablation study two years ago.
Thank you for your answers and your work!
Hello, in your `ft_netAB` id encoder defined in `reIDmodel.py` you detach the id embedding. In the `gen_update` loss function in `trainer.py` you calculate the id embedding reconstruction loss (as specified in your paper), but you are using the detached embeddings. Because the embeddings are detached, these losses are not constraining the model. Did I miss something? (A sketch of the pattern I mean is below.)
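The pattern in question looks roughly like this; it is a paraphrased sketch with hypothetical shapes and module names, not the actual `ft_netAB` or `gen_update` code:

```python
# Condensed sketch of the two places referred to above (see `reIDmodel.py`
# and `trainer.py` for the real code).
import torch
import torch.nn as nn

class IdEncoder(nn.Module):
    """Stand-in for the `ft_netAB` id encoder."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 4)

    def forward(self, x):
        f = self.backbone(x)
        return f.detach()  # the id embedding is detached inside the forward

# In `gen_update`, the id reconstruction loss is then computed on these
# already-detached embeddings, e.g.:
#   loss_gen_recon_f_a = torch.mean(torch.abs(f_a_recon - f_a))
# Since both tensors are detached, this loss carries no gradient.
```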