Closed: uxian closed this issue 11 years ago
I have heard of LTR, but I'm not in this area. To be frank, RPR is technically weak; I can share some insights offline... Once you hear them, you will quickly recognize that RPR is just the simplest neural network, incidentally reinvented in another form... This part is not included in the report — I found this relation only after the course project. That may be partly why it works.
As to the categorization: you mentioned three evaluation criteria, and I think the constraints of RPR fall into the "pairwise" category.
The top-k issue you mentioned is a good point. It is not captured by the Kendall's tau evaluation, and it is also not reflected in the (transformed) objective function. One way to address it is to allow weighted edges on the preference graph. Say we have three tags A, B, C with the relation A > B > C. Under the current objective, the rankings (B, C, A) and (C, A, B) are equally good: each has 2 reversed pairs. Looking closer, the difference is that the first reverses (B, A) while the second reverses (C, B), and the intuition is that reversing (B, A) may be worse than reversing (C, B). I feel weighted edges can capture some of the listwise diminishing importance (meeting criteria like MAP, DCG, NDCG, etc.). That would be a good extension to the current framework.
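To make the example above concrete, here is a small sketch that counts reversed pairs against the true order A > B > C, plus a weighted variant. The weight values are hypothetical (chosen so that mistakes involving the top item cost more); they are not from the report.

```python
from itertools import combinations

def count_inversions(ranking, true_order):
    """Count pairs that a ranking orders differently from the true preference order."""
    pos = {item: i for i, item in enumerate(true_order)}
    inv = 0
    for i, j in combinations(range(len(ranking)), 2):
        # ranking places ranking[i] before ranking[j];
        # it is an inversion if the true order disagrees
        if pos[ranking[i]] > pos[ranking[j]]:
            inv += 1
    return inv

def weighted_inversions(ranking, true_order, weight):
    """Same count, but a reversed pair (x, y) costs weight[(x, y)] instead of 1."""
    pos = {item: i for i, item in enumerate(true_order)}
    cost = 0.0
    for i, j in combinations(range(len(ranking)), 2):
        if pos[ranking[i]] > pos[ranking[j]]:
            # true order puts ranking[j] before ranking[i]
            cost += weight[(ranking[j], ranking[i])]
    return cost

true_order = ['A', 'B', 'C']  # A > B > C
print(count_inversions(('B', 'C', 'A'), true_order))  # 2
print(count_inversions(('C', 'A', 'B'), true_order))  # 2

# Hypothetical edge weights: reversing a pair involving A hurts more
w = {('A', 'B'): 3.0, ('A', 'C'): 2.0, ('B', 'C'): 1.0}
print(weighted_inversions(('B', 'C', 'A'), true_order, w))  # 5.0
print(weighted_inversions(('C', 'A', 'B'), true_order, w))  # 3.0
```

With uniform weights the two rankings tie at 2 inversions; with the weighted edges they separate, which is exactly the top-k sensitivity the extension is after.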
I see. I had heard that neural networks are not very practical, so I haven't read about them; I will pick it up. Thanks!~
I think NN's theoretical guarantees are not that good, so people tend to criticize it if you use it directly. On the other hand, formulating it as an optimization problem and using first-order optimization methods looks more elegant (like the GD in the report). Only after the project did I realize that an SGD implementation of RPR is equivalent to an NN from the machine's point of view.
SGD is one way to tackle RPR after some transformations. One may find other approaches to solving RPR, which would make it different from NN.
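A minimal sketch of the SGD-on-pairwise-preferences idea (not the actual RPR code; the loss and features here are illustrative): learn a linear score w·x from preferences "x_pos should rank above x_neg" by SGD on the logistic pairwise loss. Viewed as a network, this is a single neuron fed the feature difference, which is why the SGD implementation ends up looking like neural-net training.

```python
import math
import random

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sgd_pairwise(pairs, dim, lr=0.1, epochs=200, seed=0):
    """SGD on the logistic pairwise loss log(1 + exp(-(w.x_pos - w.x_neg)))."""
    rng = random.Random(seed)
    w = [0.0] * dim
    for _ in range(epochs):
        x_pos, x_neg = pairs[rng.randrange(len(pairs))]
        diff = [p - n for p, n in zip(x_pos, x_neg)]
        margin = dot(w, diff)
        # derivative of the logistic loss w.r.t. the margin
        g = -1.0 / (1.0 + math.exp(margin))
        w = [wi - lr * g * di for wi, di in zip(w, diff)]
    return w

# Toy data (hypothetical): 2-feature items, first feature should dominate
pairs = [([1.0, 0.2], [0.3, 0.9]), ([0.8, 0.1], [0.2, 0.5])]
w = sgd_pairwise(pairs, dim=2)
assert dot(w, pairs[0][0]) > dot(w, pairs[0][1])  # preferred item scores higher
```

The same loop could be swapped for another solver (e.g. batch GD, or a constraint-based method), which is the sense in which RPR need not coincide with an NN.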
Coming back to close this thread with a few notes:
@hupili Can I ask some questions about your RPR algorithm? I consider it an exercise in learning ML...
You must have heard of learning to rank (L2R) algorithms; they can be categorized into pointwise, pairwise, and listwise approaches. Is there any correlation between L2R and RPR? Can RPR be categorized as a pairwise approach?
People care more about the top-k messages; a wrong order in the top-k is more damaging than one in the middle-k, but Kendall's tau treats the top-k the same as the middle-k. Is this a weakness? It seems not a fatal one; I just thought about it :D
Thanks!