gusye1234 / LightGCN-PyTorch

The PyTorch implementation of LightGCN

The implementation of BPR loss is different from that stated in the paper #4

Closed. Dinxin closed this issue 4 years ago.

Dinxin commented 4 years ago

The code implementation of the BPR loss:

def bpr_loss(self, users, pos, neg):
        (users_emb, pos_emb, neg_emb, 
        userEmb0,  posEmb0, negEmb0) = self.getEmbedding(users.long(), pos.long(), neg.long())
        reg_loss = (1/2)*(userEmb0.norm(2).pow(2) + 
                         posEmb0.norm(2).pow(2)  +
                         negEmb0.norm(2).pow(2))/float(len(users))
        pos_scores = torch.mul(users_emb, pos_emb)
        pos_scores = torch.sum(pos_scores, dim=1)
        neg_scores = torch.mul(users_emb, neg_emb)
        neg_scores = torch.sum(neg_scores, dim=1)

        loss = torch.mean(torch.nn.functional.softplus(neg_scores - pos_scores))

        return loss, reg_loss

The formula stated in the paper (BPR loss with L2 regularization on the layer-0 embeddings):

L_{BPR} = -\sum_{u=1}^{M} \sum_{i \in \mathcal{N}_u} \sum_{j \notin \mathcal{N}_u} \ln \sigma\left(\hat{y}_{ui} - \hat{y}_{uj}\right) + \lambda \left\lVert \mathbf{E}^{(0)} \right\rVert^{2}

gusye1234 commented 4 years ago

Do you mean this line,

reg_loss = (1/2)*(userEmb0.norm(2).pow(2) + 
                         posEmb0.norm(2).pow(2)  +
                         negEmb0.norm(2).pow(2))/float(len(users))

which has a 1/2 in front of the regularization term? I think that's an unimportant difference, since we can always absorb the 1/2 into lambda.
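To spell that out (taking lambda to be the weight-decay coefficient that multiplies reg_loss in the training loop, which is an assumption about how the training code wires things up):

\lambda \cdot \tfrac{1}{2} \left\lVert \mathbf{E}^{(0)} \right\rVert^{2} = \tfrac{\lambda}{2} \left\lVert \mathbf{E}^{(0)} \right\rVert^{2}

So choosing a lambda twice as large reproduces the paper's \lambda \lVert \mathbf{E}^{(0)} \rVert^{2} term exactly; the remaining division by len(users) is just a per-batch normalization.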

lemonadeseason commented 3 years ago

I have a question about this line: loss = torch.mean(torch.nn.functional.softplus(neg_scores - pos_scores)). Softplus is a smooth approximation of the ReLU function and is often used to keep an output strictly positive. Is this implementation of the loss consistent with your paper?
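For reference, the two forms agree mathematically: softplus(x) = ln(1 + e^x), so softplus(neg_scores - pos_scores) = -ln sigma(pos_scores - neg_scores), which is the paper's -ln sigma(y_ui - y_uj) term. A minimal numerical sketch of this identity (the score values below are made up purely for illustration):

import torch
import torch.nn.functional as F

# Hypothetical per-pair scores; in the model these would be the inner
# products of user embeddings with positive/negative item embeddings.
pos_scores = torch.tensor([2.3, 0.1, -1.5])
neg_scores = torch.tensor([1.0, 0.4, -2.0])

# Loss as written in the repository code.
loss_code = F.softplus(neg_scores - pos_scores).mean()

# Loss as written in the paper: -ln sigma(y_ui - y_uj), averaged.
loss_paper = -torch.log(torch.sigmoid(pos_scores - neg_scores)).mean()

print(torch.allclose(loss_code, loss_paper))  # True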