Closed: wuyuanmm closed this issue 2 years ago
Hi @wuyuanmm,
Thank you for your interest in our paper and for sharing your results with us. Note that our model is a purely temporal model: we do not use any image or ego-motion features for JAAD. In this particular example, although the ego-vehicle is stationary in the present frame, it moves forward in the next few frames, so the target future trajectory is shorter than, or in the opposite direction of, the person's own movement. As we mentioned in our paper, it is difficult to predict the correct trajectory in such cases. However, when no ego-motion is involved, our model generates a better result (see image).
Thank you very much for your reply! I also want to know whether SGNet can be trained with ego-motion information. If so, what should I do? Thanks, I look forward to your reply.
We haven't tried it on JAAD yet, but we believe it can. You can check out this paper: Unsupervised Traffic Accident Detection in First-Person Videos (GitHub: https://github.com/MoonBlvd/tad-IROS2019). They generated ego-motion with ORB-SLAM2 and used it as an additional input to the decoder. In fact, we trained SGNet on HEV-I with ego-motion.
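To make the idea above concrete, here is a minimal NumPy sketch of a recurrent decoder step that takes a per-frame ego-motion vector as an additional input. All names and dimensions here are illustrative assumptions, not SGNet's actual architecture (which uses a GRU-based decoder in PyTorch); a plain RNN cell stands in for the GRU, and the weights are random stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not SGNet's real sizes).
T_pred = 15      # prediction horizon in frames
traj_dim = 4     # bounding box (cx, cy, w, h)
ego_dim = 3      # per-frame ego-motion, e.g. yaw + planar translation from ORB-SLAM2
hidden_dim = 64

# Random stand-ins for learned weights and inputs.
W_in = rng.normal(size=(traj_dim + ego_dim, hidden_dim)) * 0.1
W_h = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1
W_out = rng.normal(size=(hidden_dim, traj_dim)) * 0.1

last_box = rng.normal(size=(traj_dim,))
ego_motion = rng.normal(size=(T_pred, ego_dim))  # one vector per future frame
h = np.zeros(hidden_dim)

preds = []
for t in range(T_pred):
    # Key idea: concatenate the ego-motion vector with the trajectory
    # feature before the recurrent update, so the decoder can compensate
    # for camera movement when predicting the next box.
    x = np.concatenate([last_box, ego_motion[t]])
    h = np.tanh(x @ W_in + h @ W_h)   # simple RNN step standing in for a GRU
    last_box = h @ W_out
    preds.append(last_box)

preds = np.stack(preds)
print(preds.shape)  # (15, 4)
```

In practice the ego-motion sequence for future frames is not observed at test time, so it would either come from an oracle (as in some evaluation protocols) or be predicted jointly; the paper linked above discusses the ORB-SLAM2 pipeline for obtaining it.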
Thank you very much. I get it. I will try.
Thank you so much for releasing this awesome project.
I have finished training on the deterministic trajectories using the modified script. After 200 epochs: MSE_05: 85.124462; MSE_10: 333.395368; MSE_15: 1057.618436. Then I implemented the visualization.
I found that the predicted results seem to have a large error, and I want to know whether I went wrong somewhere. Thanks!
code is here test_vis.zip
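For reference, the MSE_05/MSE_10/MSE_15 numbers above can be sanity-checked with a small script like the one below. This is a sketch under the assumption that the metric is the mean squared error over bounding-box coordinates, accumulated up to 0.5s/1.0s/1.5s at 30 fps; the function name and exact averaging convention are my assumptions, so check them against the repository's evaluation code before comparing numbers.

```python
import numpy as np

def horizon_mse(pred, gt, fps=30, horizons=(0.5, 1.0, 1.5)):
    """Hypothetical horizon-wise MSE.

    pred, gt: arrays of shape (N, T, 4) in pixels,
    where N is the number of samples and T the prediction length.
    Returns the MSE averaged over all frames up to each horizon.
    """
    # Per-frame MSE averaged over samples and the 4 box coordinates -> shape (T,)
    sq_err = np.mean((pred - gt) ** 2, axis=(0, 2))
    return {h: float(np.mean(sq_err[: int(h * fps)])) for h in horizons}

# Sanity check with synthetic boxes: a constant 5-pixel offset on every
# coordinate should give an MSE of 25 at every horizon.
gt = np.zeros((2, 45, 4))
pred = gt + 5.0
print(horizon_mse(pred, gt))  # {0.5: 25.0, 1.0: 25.0, 1.5: 25.0}
```

Running this against your own predictions and ground truth (instead of the synthetic arrays) is a quick way to tell whether a large visualized error comes from the model or from a coordinate/scale mismatch in the visualization code.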