Closed — shlee-home closed this issue 1 year ago
Hi, thanks for your interest in our work. The labels are set to torch.zeros(logits.shape[0]) because F.cross_entropy takes label *indices* as its second argument, not one-hot vectors. We place the positive pair at index 0 of each row of logits, so the target index for every instance is 0 — hence torch.zeros. You can refer to the function documentation here.
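To make the point concrete, here is a minimal sketch (not the repository's actual code) of a contrastive loss where column 0 of each row of `logits` holds the positive pair's similarity and the remaining columns hold negatives; the shapes and variable names are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

# Hypothetical similarity matrix: 4 instances, each with
# 1 positive (column 0) and 7 negatives (columns 1..7).
logits = torch.randn(4, 8)

# F.cross_entropy expects class *indices* as targets.
# The positive is always at column 0, so every target index is 0.
# (dtype=torch.long is required for index targets.)
labels = torch.zeros(logits.shape[0], dtype=torch.long)

loss = F.cross_entropy(logits, labels)
print(loss.item())  # a non-negative scalar
```

Internally this computes -log softmax(logits)[i, 0] averaged over the batch, i.e. it pushes the positive logit above the negatives; the zeros are index labels, not a statement that the positive's "value" is 0.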
Hello again. I'm studying your paper and code. However, in the following code in your 'cost.py' file, I would have expected the label for the positive instance to be 1 rather than 0 when the cross-entropy is calculated. I'm not sure I understand this correctly, so I would appreciate your advice. Thank you.