Hi, @lihuiliullh!
`reg_loss` computes the L2 regularization for the batch user and item embeddings. `loss` corresponds to the BPR loss for the recommendation task.
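For reference, here is a minimal PyTorch sketch of what these two terms typically look like; the names (`bpr_and_reg_loss`, `user_emb`, `pos_emb`, `neg_emb`, `reg_lambda`) are illustrative placeholders, not this repo's actual code:

```python
import torch
import torch.nn.functional as F

def bpr_and_reg_loss(user_emb, pos_emb, neg_emb, reg_lambda=1e-4):
    # Pairwise scores: inner product of user and item embeddings.
    pos_scores = (user_emb * pos_emb).sum(dim=-1)
    neg_scores = (user_emb * neg_emb).sum(dim=-1)
    # BPR loss in the softplus form: softplus(-x) == -logsigmoid(x).
    loss = F.softplus(-(pos_scores - neg_scores)).mean()
    # L2 regularization over only the embeddings seen in this batch,
    # averaged over the batch size (batch-wise, as in NGCF / LightGCN).
    reg_loss = reg_lambda * (
        user_emb.pow(2).sum() + pos_emb.pow(2).sum() + neg_emb.pow(2).sum()
    ) / user_emb.shape[0]
    return loss + reg_loss

# Example usage with random embeddings:
u, p, n = torch.randn(32, 64), torch.randn(32, 64), torch.randn(32, 64)
print(bpr_and_reg_loss(u, p, n))
```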
@yuh-yang May I know why you use L2 regularization for the batch user and item embeddings? Can this trick improve performance?
I also notice that for the BPR loss, the formulation in the paper is $-\log \sigma(\hat{y}_{pos} - \hat{y}_{neg})$.
But in your code, it is `softplus(-(pos_scores - neg_scores))`.
Are these two the same?
Generally, L2 regularization is effective against overfitting. This batch-wise usage follows NGCF and LightGCN.
Using `softplus` instead of `logsigmoid` in the BPR loss is a common practice to avoid NaN loss when, in some cases during training, the model performs poorly and scores negative samples very high.
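Note that the two forms are mathematically equivalent, since $-\log \sigma(x) = \log(1 + e^{-x}) = \mathrm{softplus}(-x)$; the difference is only numerical. A quick standalone sanity check (not from this repo):

```python
import torch
import torch.nn.functional as F

x = torch.randn(5) * 10  # stands in for pos_scores - neg_scores
print(F.softplus(-x))    # softplus(-(pos_scores - neg_scores))
print(-F.logsigmoid(x))  # -log(sigmoid(pos_scores - neg_scores))
# Both print the same values up to floating-point precision.
```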
Referring to this: https://github.com/xiangwang1223/neural_graph_collaborative_filtering/issues/17
Closed for being inactive.
May I know which loss formulations in the paper these two images correspond to?