Thank you for your wonderful work.
I am a little confused about this code:
```python
for previous_net in previous_nets:
    previous_net.cuda()
    _, pro3, _ = previous_net(x)
    nega = cos(pro1, pro3)
    logits = torch.cat((logits, nega.reshape(-1, 1)), dim=1)
    previous_net.to('cpu')
```
First, in the paper the negative representation seems to come from the local model of the last round only, not from all previous rounds, so I want to confirm whether the loop over `previous_nets` here makes sense. Second, in `logits = torch.cat((logits, nega.reshape(-1,1)), dim=1)`, what is the purpose of this concatenation? Looking forward to your reply. Thank you!
`previous_nets` is a list of size 1; by default it contains only the local model from the last round.
The concatenation makes it easy to compute the contrastive loss with cross-entropy loss. By stacking the positive and negative similarities as columns of `logits`, the contrastive loss can be computed with cross-entropy loss in line 311.
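To illustrate the idea, here is a minimal self-contained sketch of this cross-entropy formulation of the contrastive loss. The tensor names (`pro1` for the current local model's representation, `pro2` for the global model's, `pro3` for the last-round local model's) follow the snippet above, but the shapes and temperature value are illustrative assumptions, not taken from the repository:

```python
import torch
import torch.nn as nn

# Assumed toy shapes: a batch of 4 samples with 8-dim projected representations.
torch.manual_seed(0)
pro1 = torch.randn(4, 8)  # representation from the current local model
pro2 = torch.randn(4, 8)  # representation from the global model (positive pair)
pro3 = torch.randn(4, 8)  # representation from last round's local model (negative pair)

cos = nn.CosineSimilarity(dim=-1)
temperature = 0.5  # illustrative value

# Column 0 holds the positive similarity, column 1 the negative similarity.
logits = cos(pro1, pro2).reshape(-1, 1)
logits = torch.cat((logits, cos(pro1, pro3).reshape(-1, 1)), dim=1)
logits /= temperature

# All labels are 0, i.e. column 0 is the "correct class". Cross-entropy then
# maximizes the positive similarity relative to the negative one, which is
# exactly the InfoNCE-style contrastive objective.
labels = torch.zeros(logits.size(0), dtype=torch.long)
loss = nn.CrossEntropyLoss()(logits, labels)
```

With more negative models in `previous_nets`, each one would simply append another column to `logits`; the label stays 0 because the positive pair is always in the first column.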