Thanks for supporting open source research and for sharing your code with the community! Having a reference implementation is a huge benefit for reproducibility and I really appreciated being able to poke around the inside of LaneGCN.
However, while running the evaluation code in test.py, I noticed that the computed miss rate was suspiciously low, although the reported FDE/ADE values seemed to be in the right ballpark. These incongruent metrics seem to stem from a misconfigured miss_threshold of 20m, instead of the Argoverse standard of 2m.
After changing the threshold, the computed miss rate of the pre-trained model is 0.16, which seems in line with expectations.
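For context, here is a minimal sketch of why the threshold dominates this metric: the miss rate counts scenarios whose best final displacement error (FDE) exceeds miss_threshold, so raising it from 2m to 20m lets almost every prediction pass. The function name and inputs below are hypothetical, not LaneGCN's actual evaluation code.

```python
import numpy as np

def miss_rate(fde_per_scenario, miss_threshold=2.0):
    """Fraction of scenarios whose best-mode FDE exceeds the threshold (meters)."""
    fde = np.asarray(fde_per_scenario, dtype=float)
    return float(np.mean(fde > miss_threshold))

# Toy example: identical predictions, very different metrics.
fde = np.array([0.5, 1.2, 3.0, 25.0])
print(miss_rate(fde, miss_threshold=2.0))   # 0.5  (two of four exceed 2 m)
print(miss_rate(fde, miss_threshold=20.0))  # 0.25 (only one exceeds 20 m)
```

With a 20m threshold, only wildly wrong predictions register as misses, which explains a low miss rate alongside ordinary FDE/ADE numbers.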
Created a PR to save folks some confusion in the future, but please let me know if I'm missing anything!