asjad18 opened this issue 4 years ago
See the paper: the cycle-consistency loss combines the losses of the two generators, and its purpose is to make both generators improve at the same time, so it is correct to optimize both generators with a single optimizer.
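A minimal sketch of why a single optimizer works: the cycle loss A → B → A depends on the weights of both generators, so one `backward()` produces gradients for both, and one Adam over the chained parameters updates both in the same step. The tiny `nn.Linear` generators and the plain L1 cycle loss here are stand-ins, not the repo's actual models:

```python
import itertools
import torch
import torch.nn as nn

# Hypothetical stand-ins for G_AB and G_BA from the snippet below;
# tiny Linear layers keep the sketch self-contained and runnable.
G_AB = nn.Linear(4, 4)
G_BA = nn.Linear(4, 4)

# One Adam optimizer over the chained parameters of BOTH generators,
# mirroring the itertools.chain call in the repo.
optimizer_G = torch.optim.Adam(
    itertools.chain(G_AB.parameters(), G_BA.parameters()), lr=1e-3
)

x = torch.randn(8, 4)
# Toy cycle-consistency-style loss: map A -> B -> A, compare to the input.
cycle_loss = nn.functional.l1_loss(G_BA(G_AB(x)), x)

optimizer_G.zero_grad()
cycle_loss.backward()   # gradients flow into both G_AB and G_BA
optimizer_G.step()      # a single step updates the weights of both
```

Because the joint loss couples the two networks, splitting this into two optimizers would not change the gradients themselves; the single optimizer is simply the tidier way to step both parameter sets together.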
optimizer_G = torch.optim.Adam(itertools.chain(G_AB.parameters(), G_BA.parameters()), lr=opt.lr, betas=(opt.b1, opt.b2))
I am unable to understand this. Why shouldn't we use separate optimizers for the two generators?