EasonApolo / mstn

PyTorch reimplementation of Moving Semantic Transfer Network

question about the alternative update between opt and opt_D #6

Closed jingzhengli closed 3 years ago

jingzhengli commented 3 years ago

I am not sure whether the code is correct. It should be:

    F_loss = C_loss + Gregloss + lamb * G_loss + lamb * semantic_loss
    opt.zero_grad()
    D_loss = D_loss + Dregloss
    opt_D.zero_grad()
    F_loss.backward(retain_graph=True)
    D_loss.backward(retain_graph=True)
    opt.step()
    opt_D.step()

The original code is:

    opt_D.zero_grad()
    # D_loss.backward(retain_graph=True)
    # opt_D.step()
    # opt.zero_grad()
    # F_loss.backward(retain_graph=True)
    # opt.step()

The problem is that, by the time opt is stepped, the parameters held by opt_D have already been modified in place, so F_loss.backward() no longer sees the graph state it needs. On my latest PyTorch version, the original ordering raises an error.
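For reference, here is a minimal runnable sketch of the proposed ordering with two optimizers. The tiny G/D modules, the dummy data, and the loss terms are placeholders standing in for the repo's feature extractor/classifier and discriminator, not its actual code:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy stand-ins: G is updated by opt, D by opt_D.
    G = nn.Linear(10, 2)   # placeholder feature extractor + classifier
    D = nn.Linear(10, 1)   # placeholder domain discriminator
    opt = torch.optim.SGD(G.parameters(), lr=1e-3)
    opt_D = torch.optim.SGD(D.parameters(), lr=1e-3)

    x = torch.randn(4, 10)                     # dummy batch
    y = torch.randint(0, 2, (4,))              # dummy class labels
    dom = torch.randint(0, 2, (4, 1)).float()  # dummy domain labels
    lamb = 1.0                                 # placeholder trade-off weight

    C_loss = F.cross_entropy(G(x), y)
    d_out = D(x)
    G_loss = F.binary_cross_entropy_with_logits(d_out, 1 - dom)
    D_loss = F.binary_cross_entropy_with_logits(d_out, dom)
    F_loss = C_loss + lamb * G_loss  # regularizers/semantic loss omitted

    # Zero both optimizers, run both backward passes, and only then step.
    # Stepping opt_D before F_loss.backward() modifies D's parameters in
    # place, which newer PyTorch versions reject with a runtime error.
    opt.zero_grad()
    opt_D.zero_grad()
    F_loss.backward(retain_graph=True)  # retain: d_out is shared with D_loss
    D_loss.backward()
    opt.step()
    opt_D.step()

(In this simplified sketch G_loss also deposits gradients into D; the actual training loop keeps the adversarial objectives separate, so the sketch only illustrates the call order, not the full loss wiring.)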

jingzhengli commented 3 years ago

However, with this change I could not reproduce the reported result, and I don't know what is wrong. The output is as follows:

    epoch: 45210, lr: 0.00056, lambda: 1.0
    validation: 196.0, 0.2465

jingzhengli commented 3 years ago

I switched to a pre-trained model, and the result improved substantially. In other words, the algorithm relies heavily on pre-trained weights.
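For anyone else hitting this: a minimal sketch of starting from ImageNet-pretrained weights via torchvision. The AlexNet backbone and the 31-class head (Office-31) are assumptions based on the original MSTN paper, not necessarily what this repo uses:

    import torch.nn as nn
    import torchvision.models as models

    # ImageNet-pretrained backbone; training the feature extractor from
    # random initialization is not expected to reach the paper's numbers.
    backbone = models.alexnet(pretrained=True)

    # Swap the 1000-way ImageNet head for the task's classifier
    # (31 classes for Office-31 is an assumption here).
    backbone.classifier[6] = nn.Linear(4096, 31)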