Gutianpei / MID

[CVPR2022] Code for CVPR 2022 paper "Stochastic Trajectory Prediction via Motion Indeterminacy Diffusion"

Reproducing failed #26

Open WannaBSteve opened 11 months ago

WannaBSteve commented 11 months ago

Hi, I really appreciate your fantastic work and the release of the source code. However, following the training instructions in the README (using the default config, baseline.yaml), I found it hard to reproduce the results in the paper. For example, the ADE of my result is nearly 0.9 (on both an RTX A4000 and an RTX 3090), while the result reported in the paper is only 0.22.

Could you please shed some light on this? I would also really appreciate it if you could provide some pretrained models.
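(For context on the numbers being compared: ADE/FDE on ETH/UCY are typically reported as best-of-N metrics over sampled trajectories. Below is a minimal sketch of how such a metric is commonly computed, not the repository's actual evaluation code; the function name and tensor shapes are assumptions for illustration only.)

```python
import torch

def min_ade_fde(pred, gt):
    """Best-of-N ADE/FDE for sampled trajectory predictions (illustrative sketch).

    pred: (N_samples, T, 2) predicted future positions for one agent
    gt:   (T, 2) ground-truth future positions
    Returns the minimum ADE and FDE over the N samples.
    """
    # Per-timestep Euclidean error for every sample: (N_samples, T)
    err = torch.norm(pred - gt.unsqueeze(0), dim=-1)
    ade = err.mean(dim=-1).min().item()  # average over time, then best sample
    fde = err[:, -1].min().item()        # final-timestep error, best sample
    return ade, fde
```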

Gutianpei commented 11 months ago

Hello,

0.9 does not seem right; there may be a bug or an incorrect config in your setup. The original pretrained models were lost due to a server change, but I can retrain one when I have time. Thanks for the comments.

WannaBSteve commented 11 months ago

Thank you for replying.

I didn't modify the config file at all. I'm now looking forward to the retrained model; that would really help. Thanks again.

JunningSu commented 11 months ago

I also failed to reproduce the results on the univ dataset; the values were similar to what you reported (0.9/1.1). Have you solved the problem?

VanHelen commented 9 months ago

Hello, I also obtained similar results with baseline.yaml. Have you solved this problem?

mh-kav-institute commented 3 months ago

Another push from me:

I also obtain results similar to those described above for all five ETH/UCY subsets when training the models. Could you please recheck the released code and configs, or upload your original pretrained models? It's great work, but it is impossible to use for further comparison or investigation if this isn't fixed.

Thank you.