nehSgnaiL closed this issue 1 year ago
@WenMellors
Here is the current leaderboard.
| Model | Recall@5 | MRR@5 | NDCG@5 |
|---|---|---|---|
| FPMC | 0.0915 | 0.0533 | 0.0627 |
| RNN | 0.2635 | 0.1688 | 0.1923 |
| ST-RNN | 0.0022 | 0.0016 | 0.0017 |
| SERM | 0.3162 | 0.2004 | 0.2293 |
| DeepMove | 0.3771 | 0.2559 | 0.2861 |
| HST-LSTM | 0.302 | 0.2023 | 0.2272 |
| LSTPM | 0.3778 | 0.2578 | 0.2877 |
The remaining models (ATST-LSTM, GeoSAN, STAN, CARA) use negative sampling for both training and prediction. That is, they do not predict a probability over all POIs; instead, they rank the positive sample (the true next location) against only a small set of negative samples. It would therefore be unfair to compare them with the models above, and their negative-sampling strategies are not open source and are hard to reproduce, so I have not included them in the leaderboard.
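To make the fairness point concrete, here is a minimal sketch (not LibCity's actual evaluator; all names and toy scores are illustrative) of how Recall@5 / MRR@5 / NDCG@5 are computed from the rank of the true next POI, and why scoring only a few sampled negatives instead of the full POI vocabulary can only shrink the candidate pool and inflate the metrics:

```python
import math

def rank_of_target(scores, target):
    """1-based rank of the true next POI among the scored candidates."""
    target_score = scores[target]
    return 1 + sum(1 for s in scores.values() if s > target_score)

def metrics_at_k(ranks, k=5):
    """Recall@k, MRR@k, NDCG@k averaged over test instances.

    Next-location prediction has a single ground-truth POI per instance,
    so NDCG reduces to 1/log2(rank + 1) when the rank is within the top k.
    """
    n = len(ranks)
    recall = sum(1 for r in ranks if r <= k) / n
    mrr = sum(1.0 / r for r in ranks if r <= k) / n
    ndcg = sum(1.0 / math.log2(r + 1) for r in ranks if r <= k) / n
    return recall, mrr, ndcg

# Full ranking: score every POI in the vocabulary (what the table assumes).
full_scores = {0: 0.1, 1: 0.9, 2: 0.3, 3: 0.05, 4: 0.7}  # toy model output
r_full = rank_of_target(full_scores, target=4)            # POI 1 beats it: rank 2

# Sampled ranking: score only the target plus a few negatives, as the
# negative-sampling models do. The strong competitor (POI 1) may simply
# not be in the candidate set, so the target's rank can only improve.
sampled_scores = {4: 0.7, 0: 0.1, 3: 0.05}                # target + 2 negatives
r_sampled = rank_of_target(sampled_scores, target=4)      # rank 1
```

With the same model scores, the sampled evaluation ranks the target first while the full-ranking evaluation ranks it second, which is exactly why the two protocols are not comparable on one leaderboard.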
@NickHan-cs Can you publish this leaderboard in the homepage?
Thanks @WenMellors ! 😄
By the way, it seems that ST-RNN underperforms on the leaderboard. Does the design of ST-RNN limit its performance on the 4SQ-TKY dataset?
As for ST-RNN, we are not sure whether the problem comes from our reproduction. We mainly referred to the open-source STRNN code and the original paper when reproducing the model, but even compared with the experimental results reported for that STRNN code, our reproduced results are still worse.
Got it. Thanks again for your comments and your work.
Hi, thanks for the fantastic work.
We can access the performances and rankings of models via the **Ranking** menu item on the LibCity home page; however, it seems that only information on the Traffic State Prediction task is available there. Could the performance leaderboard for the Trajectory Next-Location Prediction task also be provided? It would be helpful to have some information about this in the docs somewhere. 🥂