Closed jingzhengli closed 3 years ago
However, I got the following result and could not reproduce the reported numbers; I don't know what is wrong: epoch: 45210, lr: 0.00056, lambda: 1.0, validation: 196.0, 0.2465
When I switched to the pre-trained model, the result improved considerably. In other words, the algorithm relies heavily on a pre-trained model.
I am not sure whether the code is correct. It should be:

```python
F_loss = C_loss + Gregloss + lamb * G_loss + lamb * semantic_loss
opt.zero_grad()
D_loss = D_loss + Dregloss
opt_D.zero_grad()
F_loss.backward(retain_graph=True)
D_loss.backward(retain_graph=True)
opt.step()
opt_D.step()
```
The original code is:

```python
opt_D.zero_grad()
```
The reason is that once `opt.step()` updates the parameters, the computation graph that `D_loss` still needs for its backward pass has been modified in place. In my newer PyTorch version, this raises an error.
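A minimal runnable sketch of the safe ordering described above: call every `backward()` before any `step()`, so that no optimizer modifies parameters that another loss's graph still depends on. The networks `F_net` and `D_net` and the loss expressions here are illustrative stand-ins, not the repository's actual models.

```python
import torch

# Hypothetical stand-ins for the feature extractor and the discriminator
# (names F_net / D_net are illustrative, not from the repo).
F_net = torch.nn.Linear(4, 4)
D_net = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(F_net.parameters(), lr=0.1)
opt_D = torch.optim.SGD(D_net.parameters(), lr=0.1)

x = torch.randn(8, 4)
feat = F_net(x)
F_loss = feat.pow(2).mean()    # stand-in for C_loss + Gregloss + lamb * ...
D_loss = D_net(feat).mean()    # stand-in for D_loss + Dregloss

# Safe ordering: zero both optimizers, run both backward passes,
# and only then step. retain_graph=True keeps the shared graph alive
# for the second backward pass.
opt.zero_grad()
opt_D.zero_grad()
F_loss.backward(retain_graph=True)
D_loss.backward()
opt.step()
opt_D.step()
```

If `step()` were interleaved between the two `backward()` calls instead, recent PyTorch versions detect that a tensor needed for gradient computation was modified in place and raise an error, which matches the behavior described above.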