sshaoshuai / MTR

MTR: Motion Transformer with Global Intention Localization and Local Movement Refinement, NeurIPS 2022.
Apache License 2.0

How to reproduce MTR-e2e results? #38

Closed qiyan98 closed 1 year ago

qiyan98 commented 1 year ago

Hello,

Thank you for sharing your code. I would like to understand how to replicate the results of the MTR-e2e model. I have noticed that there is no available configuration in the repository, and I am finding it difficult to comprehend the precise method for selecting the "positive mixture component."

> **MTR-e2e for end-to-end motion prediction.** We also propose an end-to-end variant of MTR, called MTR-e2e, where only 6 motion query pairs are adopted so as to remove NMS post-processing. In the training process, instead of using static intention points for target assignment as in MTR, MTR-e2e selects the positive mixture component by calculating the distances between its 6 predicted trajectories and the GT trajectory, since 6 intention points are too sparse to well cover all potential future motions.

Could you provide further clarification on the definition of the "positive mixture component" and explain how to reproduce the results of the end-to-end (e2e) approach? Thank you.

sshaoshuai commented 1 year ago

The positive mixture component is the single predicted trajectory, out of the 6 predictions from the 6 queries, whose endpoint has the smallest distance to the GT trajectory's endpoint. During training, only this positive predicted trajectory is optimized toward the GT trajectory.
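For illustration, here is a minimal NumPy sketch of that selection rule. The function name and array shapes are assumptions for this example (K predicted trajectories of T waypoints in 2D), not code from the MTR repository:

```python
import numpy as np

def select_positive_component(pred_trajs, gt_traj):
    """Pick the positive mixture component: the predicted trajectory
    whose endpoint is closest to the GT trajectory's endpoint.

    pred_trajs: (K, T, 2) array of K predicted 2D trajectories
    gt_traj:    (T, 2) ground-truth trajectory
    Returns the index of the positive component.
    """
    # Euclidean distance between each predicted endpoint and the GT endpoint.
    endpoint_dists = np.linalg.norm(pred_trajs[:, -1, :] - gt_traj[-1, :], axis=-1)
    return int(np.argmin(endpoint_dists))
```

During training, the regression loss would then be applied only to the component at the returned index, while the classification (mixture weight) loss treats it as the positive label.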

For now, I still do not have plans to spend time on preparing a code release for MTR-e2e. But I think it should be easy to achieve similar performance based on the MTR codebase.