joeybose / Flexible-Fairness-Constraints

Code for ICML2019 Paper "Compositional Invariance Constraints for Graph Embeddings"

Question about the backward propagation of train_gcmc function in transD_movielens.py #6

Open 2016312357 opened 2 years ago

2016312357 commented 2 years ago

Hello, I have a question about the loss in the function train_gcmc() in transD_movielens.py. The code is below. Why are you accumulating l_penalty_2 over all discriminators (the `l_penalty_2 +=` line)? In my opinion, each discriminator could be trained separately with its own loss; it has nothing to do with the other discriminators.

```python
for k in range(0, args.D_steps):
    l_penalty_2 = 0
    for fairD_disc, fair_optim in zip(masked_fairD_set,
                                      masked_optimizer_fairD_set):
        if fairD_disc is not None and fair_optim is not None:
            fair_optim.zero_grad()
            l_penalty_2 += fairD_disc(filter_l_emb.detach(),
                                      p_batch[:, 0], True)
            if not args.use_cross_entropy:
                fairD_loss = -1 * (1 - l_penalty_2)
            else:
                fairD_loss = l_penalty_2
            fairD_loss.backward(retain_graph=True)
            fair_optim.step()
```
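To make the concern concrete, here is a minimal, self-contained PyTorch sketch (the two scalar parameters are hypothetical stand-ins for the discriminators, not the repo's actual models). Because `backward()` is called inside the loop on the *running* sum with `retain_graph=True`, the earlier discriminators' loss terms are backpropagated again on every later iteration, so their gradients get counted multiple times; training each discriminator on its own loss avoids this:

```python
import torch

# Variant 1: accumulate the loss and call backward() inside the loop,
# mirroring the structure of the train_gcmc() snippet above.
a = torch.tensor(2.0, requires_grad=True)  # "discriminator" 1, loss a**2
b = torch.tensor(3.0, requires_grad=True)  # "discriminator" 2, loss b**2
total = 0.0
for p in (a, b):
    total = total + p ** 2
    total.backward(retain_graph=True)  # re-backprops earlier terms too
grads_accum = [a.grad.item(), b.grad.item()]

# Variant 2: each "discriminator" trained on its own loss only.
a2 = torch.tensor(2.0, requires_grad=True)
b2 = torch.tensor(3.0, requires_grad=True)
for p in (a2, b2):
    loss = p ** 2
    loss.backward()
grads_sep = [a2.grad.item(), b2.grad.item()]

print(grads_accum)  # [8.0, 6.0] -- a's gradient (2*a = 4) counted twice
print(grads_sep)    # [4.0, 6.0] -- each parameter gets its own gradient
```

Note the two variants agree on the last discriminator but not on the earlier ones, which is why accumulating across discriminators changes the effective update.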