xiangwang1223 / neural_graph_collaborative_filtering

Neural Graph Collaborative Filtering, SIGIR2019

About "reg_loss" and "_split_A_hat" function #11

Open yuanyuansiyuan opened 5 years ago

yuanyuansiyuan commented 5 years ago
  1. `reg_loss = tf.constant(0.0, tf.float32, [1])` — why do you add a zero constant to the loss equation? (A sketch of how this constant enters the loss follows below.)
  2. Why do you split A_hat into fold parts? Is it for memory efficiency or for other reasons?
  3. Have you tried using more negative samples per observation?
  4. Have you tried a binary cross-entropy loss, as in the earlier NCF work?

Thank you!
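For context, here is a minimal sketch of where such a zero-constant term typically sits in a BPR-style objective. The function name `bpr_loss` and its arguments are illustrative and are not taken from the released code:

```python
import tensorflow as tf

def bpr_loss(u_emb, pos_emb, neg_emb, decay, batch_size):
    """Pairwise BPR loss with embedding regularization; reg_loss stays a
    zero placeholder, as quoted in the question above."""
    pos_scores = tf.reduce_sum(tf.multiply(u_emb, pos_emb), axis=1)
    neg_scores = tf.reduce_sum(tf.multiply(u_emb, neg_emb), axis=1)

    # Pairwise ranking term: push positive scores above negative ones.
    mf_loss = tf.reduce_mean(tf.nn.softplus(-(pos_scores - neg_scores)))

    # L2 regularization on the embeddings, averaged over the batch.
    regularizer = tf.nn.l2_loss(u_emb) + tf.nn.l2_loss(pos_emb) + tf.nn.l2_loss(neg_emb)
    emb_loss = decay * regularizer / batch_size

    # Zero constant: keeps the loss signature fixed so that another term
    # (e.g. weight regularization) can be plugged in and monitored later.
    reg_loss = tf.constant(0.0, tf.float32, [1])
    return mf_loss, emb_loss, reg_loss
```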

xiangwang1223 commented 5 years ago

Hi,

  1. To monitor the training phase; you can replace it with any target variable you want to track.
  2. Yes, mainly for memory efficiency (see the sketch after this reply).
  3. No. For a fair comparison, all baselines and NGCF pair each positive sample with one negative sample. You can try more negatives yourself.
  4. No. We treat top-n recommendation as a ranking task (i.e., we care more about the relative order of items), whereas a cross-entropy loss cares more about the predicted values themselves. You can try it yourself.

Thanks!
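For readers asking about `_split_A_hat`, a minimal sketch of the fold-splitting idea follows. It assumes a SciPy sparse A_hat and uses helper names that mirror, but may not exactly match, the released code:

```python
import numpy as np
import tensorflow as tf

def convert_sp_mat_to_sp_tensor(X):
    # SciPy sparse matrix (CSR/COO) -> tf.sparse.SparseTensor
    coo = X.tocoo().astype(np.float32)
    indices = np.stack([coo.row, coo.col], axis=1).astype(np.int64)
    return tf.sparse.SparseTensor(indices, coo.data, coo.shape)

def split_A_hat(A_hat, n_fold):
    """Split A_hat row-wise into n_fold sparse tensors so that the
    sparse-dense matmul A_hat @ E is done fold by fold, keeping the
    peak memory of any single sparse op small."""
    n_rows = A_hat.shape[0]  # n_users + n_items
    fold_len = n_rows // n_fold
    folds = []
    for i in range(n_fold):
        start = i * fold_len
        end = n_rows if i == n_fold - 1 else (i + 1) * fold_len
        folds.append(convert_sp_mat_to_sp_tensor(A_hat[start:end]))
    return folds

# Usage: propagate the embeddings fold by fold, then concatenate the pieces.
# embeddings = tf.concat(
#     [tf.sparse.sparse_dense_matmul(f, ego_embeddings) for f in folds], axis=0)
```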

bbjy commented 4 years ago

Hi @xiangwang1223 , thank you for your work. Why do you set reg_loss as a constant just to monitor the training phase? What is the aim of, or the need for, monitoring it? And why not set reg_loss to the regularization loss of the trainable parameters W, which does not seem to appear in the released code? Thank you!
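In case it helps, the change bbjy describes could look roughly like the sketch below: replacing the zero constant with an L2 penalty over the trainable propagation weights. The names `weight_list` and `weight_decay` are hypothetical and not from the released code:

```python
import tensorflow as tf

def weight_reg_loss(weight_list, weight_decay):
    # L2 penalty over the trainable weight matrices W (e.g. the per-layer
    # graph-convolution weights), summed and scaled by a decay coefficient.
    return weight_decay * tf.add_n([tf.nn.l2_loss(w) for w in weight_list])
```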