maciejkula / spotlight

Deep recommender models using PyTorch.
MIT License
2.97k stars 421 forks

Is dot product the right way to predict? #157

Open JoaoLages opened 5 years ago

JoaoLages commented 5 years ago

While training implicit sequence models, we use losses like hinge, BPR, and pointwise. These losses don't directly maximize the dot product, so why do we use the dot product at prediction time?

maciejkula commented 5 years ago

These losses maximize the difference between the dot products of the positive and (implicit) negative items, and so using the dot product for prediction is appropriate.
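A minimal PyTorch sketch of this idea (not Spotlight's exact implementation): a BPR-style loss minimizes `-log sigmoid(pos_score - neg_score)`, which is minimized precisely when the positive item's dot product exceeds the negative item's, so ranking by dot product at prediction time is consistent with what was trained.

```python
import torch
import torch.nn.functional as F

def bpr_loss(pos_score, neg_score):
    # Minimizing -log sigmoid(pos - neg) pushes the positive item's
    # dot product above the sampled negative's.
    return -F.logsigmoid(pos_score - neg_score).mean()

# Toy embeddings: one user, one positive item, one sampled negative item.
user = torch.randn(8)
pos_item = torch.randn(8)
neg_item = torch.randn(8)

# Scores are plain dot products; the loss only cares about their difference.
loss = bpr_loss(user @ pos_item, user @ neg_item)
```

The loss shrinks as the positive score pulls ahead of the negative score, which is exactly the ordering the dot product recovers at prediction time.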

nilansaha commented 3 years ago

@JoaoLages A bit late to the party, but what we are really optimizing here are the user and item embeddings. The dot product is merely the operation that combines the two embeddings into a single score. Backprop flows through the dot product and adjusts the embeddings so that the score is maximized for positive items and minimized for negative items. Correct me if I am wrong @maciejkula
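A small sketch of that point (my own toy example, not library code): the gradient of a dot product with respect to each embedding is simply the other embedding, so a gradient step on the score moves the two embeddings toward (or away from) each other.

```python
import torch

# Two trainable embeddings; the dot product just combines them into a score.
user = torch.randn(4, requires_grad=True)
item = torch.randn(4, requires_grad=True)

score = user @ item   # the prediction
score.backward()      # backprop flows through the dot product

# d(score)/d(user) == item and d(score)/d(item) == user, so an ascent step
# on the score nudges each embedding toward the other.
```

This is why training "through" the dot product ends up shaping the embeddings themselves: positive pairs are pulled into alignment, negatives pushed apart.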