yuh-yang / KGCL-SIGIR22

[SIGIR'22] Knowledge Graph Contrastive Learning for Recommendation
https://arxiv.org/abs/2205.00976
MIT License

loss formulations #15

Closed · lihuiliullh closed this issue 1 year ago

lihuiliullh commented 1 year ago

May I know which loss formulations in the paper these two images correspond to?

[image 1]

[image 2]

yuh-yang commented 1 year ago

Hi, @lihuiliullh !

`reg_loss` computes the L2 regularization over the user and item embeddings in the current batch.

`loss` corresponds to the BPR loss for the recommendation task.
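For reference, here is a minimal PyTorch sketch of what these two terms typically look like in LightGCN-style training code. It is not the exact code in this repo; the function name `bpr_and_reg_loss` and the `reg_weight` argument are illustrative.

```python
import torch
import torch.nn.functional as F

def bpr_and_reg_loss(user_emb, pos_item_emb, neg_item_emb, reg_weight=1e-4):
    """BPR loss plus batch-wise L2 regularization (illustrative sketch)."""
    # Scores for positive and negative items (dot product per user)
    pos_scores = (user_emb * pos_item_emb).sum(dim=-1)
    neg_scores = (user_emb * neg_item_emb).sum(dim=-1)

    # BPR loss in the numerically stable softplus form:
    # softplus(-(pos - neg)) == -log(sigmoid(pos - neg))
    loss = F.softplus(-(pos_scores - neg_scores)).mean()

    # L2 regularization over only the embeddings that appear in this batch,
    # averaged over the batch size
    reg_loss = reg_weight * (
        user_emb.norm(2).pow(2)
        + pos_item_emb.norm(2).pow(2)
        + neg_item_emb.norm(2).pow(2)
    ) / user_emb.shape[0]

    return loss, reg_loss
```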

lihuiliullh commented 1 year ago

@yuh-yang May I know why you use L2 regularization on the batch user and item embeddings? Does this trick improve performance?

lihuiliullh commented 1 year ago

I also notice that for the BPR loss, the paper's formulation is the logsigmoid form, $-\log \sigma(\hat{y}_{ui} - \hat{y}_{uj})$.

But in your code, it is `softplus(-(pos_scores - neg_scores))`.

Are these two the same?

yuh-yang commented 1 year ago
  1. Generally, L2 regularization is effective against overfitting. This batch-wise usage follows NGCF and LightGCN.

  2. Yes, they are the same: `softplus(-(pos_scores - neg_scores))` equals `-log(sigmoid(pos_scores - neg_scores))`, so it is exactly the BPR loss. Using softplus instead of logsigmoid is a common practice to avoid NaN loss when, in some cases during training, the model scores negative samples much higher than positive ones (see the sketch after the link below).

Referring to this: https://github.com/xiangwang1223/neural_graph_collaborative_filtering/issues/17
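
To make the equivalence concrete, a small check (illustrative, not code from this repo): `softplus(-x) = log(1 + exp(-x)) = -log(sigmoid(x))`, so the softplus form is the BPR loss written in a numerically safer way.

```python
import torch
import torch.nn.functional as F

# softplus(-x) = log(1 + exp(-x)) = -log(sigmoid(x)), so the two forms agree
x = torch.randn(1000) * 10  # synthetic score differences (pos_scores - neg_scores)
assert torch.allclose(F.softplus(-x), -F.logsigmoid(x), atol=1e-6)

# The naive -torch.log(torch.sigmoid(x)) can return inf (and then NaN gradients)
# once sigmoid(x) underflows to 0 for very negative x, i.e. when negative samples
# are scored far above positive ones; softplus / logsigmoid never materialize
# sigmoid(x), so they stay finite.
```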

yuh-yang commented 1 year ago

Closing due to inactivity.