gusye1234 / LightGCN-PyTorch

The PyTorch implementation of LightGCN

The L2 regularization #39

Open gzy02 opened 1 year ago

gzy02 commented 1 year ago

When the model is trained with mini-batches, the L2 regularization term is not computed over all model parameters, but only over the parameters corresponding to the user and item embeddings involved in the current batch. Is this a deliberate trick in the experiments?


    def bpr_loss(self, users, pos, neg):
        # *_emb are the propagated (layer-averaged) embeddings used for scoring;
        # *Emb0 are the raw 0th-layer embeddings, used only for regularization.
        (users_emb, pos_emb, neg_emb,
         userEmb0, posEmb0, negEmb0) = self.getEmbedding(users.long(), pos.long(), neg.long())
        # L2 penalty over the 0th-layer embeddings of this batch only,
        # averaged over the batch size.
        reg_loss = (1/2) * (userEmb0.norm(2).pow(2) +
                            posEmb0.norm(2).pow(2) +
                            negEmb0.norm(2).pow(2)) / float(len(users))
        # Inner-product scores for positive and negative user-item pairs.
        pos_scores = torch.sum(torch.mul(users_emb, pos_emb), dim=1)
        neg_scores = torch.sum(torch.mul(users_emb, neg_emb), dim=1)
        # BPR loss: softplus(neg - pos) == -log(sigmoid(pos - neg)).
        loss = torch.mean(torch.nn.functional.softplus(neg_scores - pos_scores))
        return loss, reg_loss
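
For comparison, the penalty above only touches the embedding rows sampled in the batch. A full-parameter L2 would instead shrink every row at every step, for example via optimizer weight decay. A minimal sketch of the two options (the sizes, batch, and hyperparameters here are illustrative, not from this repo):

    import torch
    import torch.nn as nn

    # Hypothetical embedding tables, standing in for LightGCN's 0th-layer parameters.
    n_users, n_items, dim = 1000, 2000, 64
    emb_user = nn.Embedding(n_users, dim)
    emb_item = nn.Embedding(n_items, dim)

    # Option A (as in bpr_loss above): L2 only on the rows sampled in this batch.
    users = torch.randint(0, n_users, (256,))
    pos = torch.randint(0, n_items, (256,))
    batch_reg = (1/2) * (emb_user(users).norm(2).pow(2) +
                         emb_item(pos).norm(2).pow(2)) / 256.0

    # Option B: L2 on *all* parameters via weight decay; every embedding row is
    # penalized each step, including rows not sampled in the current batch.
    params = list(emb_user.parameters()) + list(emb_item.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3, weight_decay=1e-4)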
kashif-flask commented 1 year ago

Since LightGCN has no feature transformation matrices, the only learnable parameters are the user and item embeddings at the 0th layer; that's why those are the ones used in the regularization term.
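
A quick way to see this is to list the parameters of a LightGCN-style module. The class below is a hypothetical stripped-down sketch, not the repo's implementation: propagation is just a fixed neighborhood average with no weights of its own, so `parameters()` yields only the two embedding tables.

    import torch
    import torch.nn as nn

    class TinyLightGCN(nn.Module):
        def __init__(self, n_users, n_items, dim, n_layers=3):
            super().__init__()
            # The 0th-layer embeddings are the only learnable parameters.
            self.embedding_user = nn.Embedding(n_users, dim)
            self.embedding_item = nn.Embedding(n_items, dim)
            self.n_layers = n_layers

        def propagate(self, norm_adj):
            # norm_adj: normalized (users+items) x (users+items) adjacency.
            # Pure matrix products, no learnable transformation or activation.
            all_emb = torch.cat([self.embedding_user.weight,
                                 self.embedding_item.weight], dim=0)
            embs = [all_emb]
            for _ in range(self.n_layers):
                all_emb = norm_adj @ all_emb
                embs.append(all_emb)
            return torch.stack(embs, dim=0).mean(dim=0)  # layer-wise mean

    model = TinyLightGCN(10, 20, 8)
    print([name for name, _ in model.named_parameters()])
    # ['embedding_user.weight', 'embedding_item.weight'] -- nothing else to regularize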