Open adebiasio-estilos opened 1 year ago
Moreover, taking BPR's current implementation as an example: how can I print the various loss variables at each forward pass?
Thanks for offering to implement this! I've skimmed the paper you mentioned, and I'll assume you're using TensorFlow 1.
It seems that the main difference is that they add a weight (w_l) parameter to the original BPR loss. So you can add a hyperparameter such as `importance_sampling`
to let users choose whether to use it, and then implement the weight (w_l).
Finally, in bpr.py, the code may look like this:
```python
if self.importance_sampling:
    self.bpr_loss = w * tf.log_sigmoid(item_diff)
else:
    self.bpr_loss = tf.log_sigmoid(item_diff)
```
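The weighted variant above can be checked numerically. Here is a minimal plain-Python sketch; the helper name `bpr_loss` and the convention of taking the negative mean log-sigmoid as the minimized loss are assumptions for illustration, not LibRecommender's exact code:

```python
import math

def log_sigmoid(x):
    # log(sigmoid(x)) = -log(1 + exp(-x))
    return -math.log1p(math.exp(-x))

def bpr_loss(item_diff, weights=None):
    # item_diff: score(positive item) - score(negative item), one per pair.
    # Plain BPR maximizes the mean log-sigmoid of this margin; with
    # importance sampling, each pair's term is scaled by its weight w_l.
    terms = [log_sigmoid(d) for d in item_diff]
    if weights is not None:
        terms = [w * t for w, t in zip(weights, terms)]
    return -sum(terms) / len(terms)
```

With `item_diff = [0.0]` the loss is log 2 ≈ 0.693, and a weight of 2.0 doubles that term, which is exactly the effect the `importance_sampling` branch would have.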
What do you mean by various loss variables? Training loss or model's variables?
Thank you for your help @massquantity! The methodology you propose looks good!
> What do you mean by various loss variables? Training loss or model's variables?
Yes, I mean the TF model's variables during graph execution. How do you usually debug TF1 code?
For example, if you want to get `self.bpr_loss`, call `sess.run` in the training function, which returns the variable's value (and hence its shape):
```python
for data in data_generator(shuffle, self.batch_size):
    self.sess.run(
        self.training_op,
        feed_dict={
            self.model.user_indices: data[0],
            self.model.item_indices_pos: data[1],
            self.model.item_indices_neg: data[2],
        },
    )

    feed_dict = {
        self.model.user_indices: data[0],
        self.model.item_indices_pos: data[1],
        self.model.item_indices_neg: data[2],
    }
    print(self.sess.run(self.model.bpr_loss, feed_dict=feed_dict).shape)
    print(self.sess.run(self.model.bpr_loss, feed_dict=feed_dict))
```
If you can't get a variable from `self.model`, just add `self.` before the variable you want to access in your implementation.
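The "add `self.`" tip is just the standard Python pattern of storing an intermediate value as an attribute, so code outside the build method can fetch it later. A tiny sketch with hypothetical names:

```python
class Model:
    def _build_loss(self, item_diff):
        hidden = item_diff * 2     # local: unreachable after the method returns
        self.bpr_loss = hidden     # attribute: fetchable later as model.bpr_loss
        return self.bpr_loss

model = Model()
model._build_loss(3)
print(model.bpr_loss)  # the attribute survives; a bare `hidden` would not
```

In TF1 the same idea applies to tensors: only tensors stored on `self` can be passed to `sess.run` from the training code.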
Thank you very much, it works!!!
Hi all, I would like to implement BPR with Importance Sampling in LibRecommender.
Thanks in advance, Alvise