Hello, I have a question about the loss in the function train_gcmc() in transD_movielens.py. Below is the code. Why are you accumulating l_penalty_2 across all discriminators (the line I marked with a comment below)? In my opinion, each discriminator could be trained separately with its own loss, independently of the other discriminators.
```python
for k in range(0, args.D_steps):
    l_penalty_2 = 0
    for fairD_disc, fair_optim in zip(masked_fairD_set,\
            masked_optimizer_fairD_set):
        if fairD_disc is not None and fair_optim is not None:
            fair_optim.zero_grad()
            l_penalty_2 += fairD_disc(filter_l_emb.detach(),\
                    p_batch[:,0], True)  # <-- the accumulation I am asking about
            if not args.use_cross_entropy:
                fairD_loss = -1 * (1 - l_penalty_2)
            else:
                fairD_loss = l_penalty_2
            fairD_loss.backward(retain_graph=True)
            fair_optim.step()
```
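For clarity, here is a minimal sketch of the alternative I have in mind, where each discriminator is updated using only its own penalty. The names (fairD_disc, fair_optim, filter_l_emb, p_batch, args.use_cross_entropy) follow the snippet above; this is a hypothetical rewrite, not code from the repo, and it assumes each discriminator returns a scalar loss tensor:

```python
# Hypothetical alternative: train each discriminator with its own loss only.
for k in range(0, args.D_steps):
    for fairD_disc, fair_optim in zip(masked_fairD_set,
                                      masked_optimizer_fairD_set):
        if fairD_disc is not None and fair_optim is not None:
            fair_optim.zero_grad()
            # Per-discriminator penalty; no accumulation across discriminators.
            l_penalty = fairD_disc(filter_l_emb.detach(), p_batch[:, 0], True)
            if not args.use_cross_entropy:
                fairD_loss = -1 * (1 - l_penalty)
            else:
                fairD_loss = l_penalty
            # Each graph is backpropagated exactly once, so retain_graph=True
            # would no longer be needed here.
            fairD_loss.backward()
            fair_optim.step()
```

With the accumulated version, the i-th discriminator's update includes the penalties of discriminators 1..i from the same pass, which is what I do not understand.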