Jhy1993 closed this issue 4 years ago.
Hi, thanks for your interest. In my experience, it helps to make the learning rate and l2_normalization smaller, say 10e-5. Thanks.
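For later readers, a minimal sketch of what that suggestion might look like in code (TF 1.x style, matching the snippet below; the names learning_rate, l2_coeff, and emb are hypothetical and not taken from this repo):

import tensorflow as tf

# Hypothetical hyperparameters following the suggestion above.
learning_rate = 10e-5   # smaller learning rate
l2_coeff = 10e-5        # smaller L2 regularization coefficient

# Toy trainable embedding standing in for the model's parameters.
emb = tf.get_variable("emb", shape=[4, 8],
                      initializer=tf.random_normal_initializer())

# L2 penalty that would be added to whatever base loss the model computes.
l2_reg = l2_coeff * tf.nn.l2_loss(emb)

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)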
Hi, I learned this from my peers. You could try the following code to modify the BPR loss in order to avoid NaN:
# softplus(-x) equals -log(sigmoid(x)), computed without underflowing to 0
loss = tf.reduce_sum(tf.nn.softplus(-(pos_result - neg_result)))
Please let me know whether it works. Thanks.
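For anyone hitting this later, here is a minimal self-contained sketch (TF 1.x style, matching the snippet above; the score values are made up for illustration) of why the softplus form avoids the NaN that the naive -log(sigmoid(x)) BPR loss can produce:

import tensorflow as tf

# Made-up score tensors; in the repo these would be the model's
# positive and negative predictions.
pos_result = tf.constant([2.0, 0.3, -60.0])
neg_result = tf.constant([1.0, 0.1, 60.0])
x = pos_result - neg_result  # third margin is -120, an extreme case

# Naive BPR loss: sigmoid(-120) underflows to 0 in float32, so log(0)
# gives -inf and the training loss blows up to inf/NaN.
naive = -tf.log(tf.sigmoid(x))

# Stable BPR loss: softplus(-x) = log(1 + exp(-x)) = -log(sigmoid(x)),
# the same quantity computed without the intermediate underflow.
stable = tf.nn.softplus(-x)

with tf.Session() as sess:
    print(sess.run(naive))   # third entry is inf
    print(sess.run(stable))  # all entries finite

Mathematically the two forms are identical; the stable one simply never evaluates sigmoid(x) at the point where it underflows to zero.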
It works! Thanks for your advice.
Hi, thanks for your brilliant code. However, I have run your code several times and it always ends with "loss is NaN". How can I avoid this?