Do you have a reason for the 1/2 in front of the regularization term in this line?

    reg_loss = (1/2)*(userEmb0.norm(2).pow(2) +
                      posEmb0.norm(2).pow(2) +
                      negEmb0.norm(2).pow(2))/float(len(users))

I think that is just an unimportant difference, since we can always absorb the 1/2 into lambda.
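(Spelling that out: writing the quoted term as a penalty on the stacked embeddings $\Theta$, and assuming a regularization weight $\lambda$ multiplies reg_loss later in the training loop,

$$\frac{\lambda}{2}\,\lVert\Theta\rVert_2^2 \;=\; \lambda'\,\lVert\Theta\rVert_2^2 \quad\text{with}\quad \lambda' = \tfrac{\lambda}{2},$$

so the extra 1/2 only rescales the effective regularization weight.)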
I have a question about this line:

    loss = torch.mean(torch.nn.functional.softplus(neg_scores - pos_scores))

softplus is a smooth approximation to the ReLU function and can be used to constrain an output to always be positive. Is this implementation of the loss consistent with your paper?
The code implementation of the BPR loss: (the loss line quoted above)
The formula stated in the paper: (screenshot not reproduced here)
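(For reference, my understanding is that the two forms coincide: softplus(-x) = ln(1 + e^(-x)) = -ln σ(x), so softplus(neg_scores - pos_scores) is exactly the -ln σ(ŷ_ui - ŷ_uj) term that the BPR loss is usually written with. Below is a minimal standalone check of that identity; it is a hypothetical snippet of mine, not code from this repo.)

    import torch
    import torch.nn.functional as F

    # Hypothetical check, not from the repo: softplus(-x) == -log(sigmoid(x)),
    # so softplus(neg_scores - pos_scores) matches the -ln sigma(pos - neg) form of BPR.
    pos_scores = torch.randn(1000)
    neg_scores = torch.randn(1000)

    bpr_via_softplus = torch.mean(F.softplus(neg_scores - pos_scores))
    bpr_via_logsigmoid = torch.mean(-F.logsigmoid(pos_scores - neg_scores))

    print(torch.allclose(bpr_via_softplus, bpr_via_logsigmoid))  # expected: True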