Status: Closed (mindojune closed this issue 3 years ago)
As a follow-up to my original question, I found out that the inputs to the losses are (scores, relevance labels), not (scores, ranks). This would explain the observation above, since relevance and rank are inversely related. That brings me to another question: my data has only rank labels, not relevance labels. What would be a good way to convert rank labels into relevance scores? A few options come to mind, such as reversing the rank or using 1/rank, but I'd appreciate your opinion on this. Thank you!
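The two conversions mentioned above can be sketched in a few lines. This is a minimal illustration with made-up rank labels, not code from the library:

```python
import numpy as np

# Hypothetical rank labels: 1 = best item, larger = worse.
ranks = np.array([1.0, 2.0, 3.0, 4.0])

# Option A: reversed rank -- equal gaps between consecutive items.
rel_reversed = ranks.max() - ranks + 1   # -> [4.0, 3.0, 2.0, 1.0]

# Option B: reciprocal rank -- emphasizes differences at the top of the list.
rel_reciprocal = 1.0 / ranks             # -> [1.0, 0.5, 0.333..., 0.25]
```

Both preserve the ordering; they differ in how strongly a relevance-weighted loss would emphasize the top positions.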
Thank you for this question and kind words. I will give it some thought and get back to you in the coming days.
Do you know what was the ranking function that your data was generated with?
If there are no indicators of relevance other than the ordering, no ties (two items with the same relevance), and no noise (so, for example, your data wasn't generated from implicit user feedback), I think you could use a loss function that doesn't weight examples according to their relevance, since the relevance is unknown. The RankNet loss function (implemented as a weighting scheme for the LambdaLoss loss function), combined with the assumption relevance(x) = 1/rank(x), could work here.
However, we haven't experimented with non-integer labels yet.
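For reference, the plain RankNet objective mentioned above only depends on the pairwise ordering of the labels, which is why it fits this setting. A minimal numpy sketch (not the library's implementation; the function name and data are illustrative):

```python
import numpy as np

def ranknet_loss(scores, relevance):
    """Plain RankNet loss: for every pair where item i is more relevant
    than item j, penalize log(1 + exp(-(s_i - s_j)))."""
    loss, pairs = 0.0, 0
    n = len(scores)
    for i in range(n):
        for j in range(n):
            if relevance[i] > relevance[j]:
                loss += np.log1p(np.exp(-(scores[i] - scores[j])))
                pairs += 1
    return loss / max(pairs, 1)

# Rank labels 1..3 turned into relevance via 1/rank (hypothetical data).
relevance = 1.0 / np.array([1.0, 2.0, 3.0])
good = ranknet_loss(np.array([3.0, 2.0, 1.0]), relevance)  # scores agree with the order
bad = ranknet_loss(np.array([1.0, 2.0, 3.0]), relevance)   # scores reversed
```

Note that only the comparison relevance[i] > relevance[j] enters the loss, so any monotone conversion of ranks (reversed rank, 1/rank) yields the same pairs.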
Thank you for your response! In my case, the only indication of relevance is the observed ordering itself. As for losses that do not weight examples according to their relevance, does ListMLE belong to this class? My understanding was that ListMLE only considers the order of the ground-truth relevance labels to compute the negative log-likelihood of that order, but I've been having trouble getting the model to actually learn. Thank you again for your work and help!
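To make the understanding above concrete: ListMLE maximizes the Plackett-Luce likelihood of the ground-truth permutation, which indeed depends only on the ordering of the labels. A minimal numpy sketch (illustrative, not the library's code):

```python
import numpy as np

def listmle_loss(scores, gt_order):
    """Negative log-likelihood of the ground-truth permutation under the
    Plackett-Luce model. gt_order lists item indices from best to worst."""
    s = np.asarray(scores, dtype=float)[list(gt_order)]
    loss = 0.0
    for i in range(len(s)):
        # -log P(item i picked next) = logsumexp(remaining scores) - s_i
        tail = s[i:]
        m = tail.max()                      # stabilize the log-sum-exp
        loss += m + np.log(np.exp(tail - m).sum()) - s[i]
    return loss

# The loss is smaller when the best item gets the highest score:
aligned = listmle_loss([3.0, 2.0, 1.0], gt_order=[0, 1, 2])
misaligned = listmle_loss([1.0, 2.0, 3.0], gt_order=[0, 1, 2])
```

Since only `gt_order` enters the loss, the actual relevance values never matter, consistent with the description above.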
Yes, ListMLE and ListNet should work here, as they also operate on the ordering rather than on relevance differences. In our experiments, we noticed that ListMLE doesn't pair very well with our neural models, so of the two I'd recommend trying ListNet first.
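For comparison with the ListMLE formulation, the (top-one) ListNet loss is a cross-entropy between two softmax distributions, one over the labels and one over the scores. A minimal numpy sketch under the same 1/rank labeling assumed earlier (illustrative, not the library's code):

```python
import numpy as np

def listnet_loss(scores, relevance):
    """Top-one ListNet: cross-entropy between the softmax of the
    relevance labels and the softmax of the predicted scores."""
    def softmax(x):
        x = np.asarray(x, dtype=float)
        e = np.exp(x - x.max())
        return e / e.sum()
    p_true = softmax(relevance)
    p_pred = softmax(scores)
    return -np.sum(p_true * np.log(p_pred))

# Hypothetical data: relevance derived from ranks 1..3 via 1/rank.
relevance = 1.0 / np.array([1.0, 2.0, 3.0])
well_ordered = listnet_loss(np.array([2.0, 1.0, 0.0]), relevance)
misordered = listnet_loss(np.array([0.0, 1.0, 2.0]), relevance)
```

Strictly speaking, ListNet's target distribution does depend on the magnitudes of the labels, but with any fixed monotone conversion of ranks it still rewards matching the ground-truth ordering.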
Closing the issue. If there are any further questions, please reopen.
First, let me thank you for this nice library! It's been of great help for my project.
My question is about how to infer the most likely ranking of items from the model scores when training with ListMLE. My understanding was that ordering the items by score in descending order gives the most likely permutation under the Plackett-Luce (PL) model. However, while experimenting with the model, I found that ordering the scores in ascending order gives sensible results, while descending order does not. Is this intended, or does it sound like I'm doing something wrong?
Thank you!
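For completeness, the descending-order inference described in the question can be sketched as follows (hypothetical scores, plain numpy rather than the library's code):

```python
import numpy as np

scores = np.array([0.1, 2.3, -0.7, 1.5])  # hypothetical model outputs

# Under the Plackett-Luce model used by ListMLE, the most probable
# permutation simply orders items by score, descending:
predicted_order = np.argsort(-scores)     # -> [1, 3, 0, 2]
```

If ascending order looks better in practice, a common culprit is a sign convention mismatch somewhere in the pipeline, e.g. labels where a smaller number means "better" being fed in as if larger meant "better".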