HKUST-Aerial-Robotics / SIMPL

SIMPL: A Simple and Efficient Multi-agent Motion Prediction Baseline for Autonomous Driving
MIT License

How to implement dynamic visualization? #12

Open yangyh408 opened 4 months ago

yangyh408 commented 4 months ago

Hi,

Congratulations on your excellent work. I am trying to use the pre-trained model you provided for validation and visualization, and I ran into several problems, listed below:

  1. Currently, visualize.py only provides a static display. Is there a dynamic visualization solution like the one shown in the YouTube video?

  2. I noticed that some pre-trained models are provided in saved_models. What is the difference between simpl_av1_bezier_ckpt.tar and simpl_av1_ckpt.tar, and how long did each of them train for?

Thanks in advance!

yangyh408 commented 4 months ago

As demonstrated in the image below:

MasterIzumi commented 4 months ago

@yangyh408 Thanks for your comments & hope this project helps you.

As for your questions: 1) we only release the code for single-frame visualization. For the visualization results on Argoverse tracking dataset, currently, we have no plan to release it as it is a little bit messy. However, I can introduce the procedure here. We first convert the Tracking dataset into the form of Forecasting dataset. Then, for each frame, we use a dataloader to prepare the input of the network and then run the forward pass to get the prediction results for all surrounding agents. Finally, we stitch all the frames to get the video.

2) simpl_av1_bezier_ckpt.tar is for the Bezier-based trajectory output. If you want to try it, revise the model_path in the scripts (e.g., scripts/simpl_av1_vis.sh) and set net_cfg["param_out"] to bezier in config/simpl_cfg.py.
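For reference, the two changes would look roughly like this; the surrounding config contents and the exact variable name in the shell script may differ in your checkout:

```python
# config/simpl_cfg.py -- switch the decoder to the Bezier parameterization
net_cfg["param_out"] = "bezier"   # the non-Bezier checkpoint uses the default value here

# scripts/simpl_av1_vis.sh -- point the checkpoint path at the Bezier weights,
# e.g. model_path=saved_models/simpl_av1_bezier_ckpt.tar (variable name assumed)
```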

MasterIzumi commented 4 months ago

As for the training time: for Argoverse 1, we use 8 RTX 3090 GPUs with a per-GPU batch size of 16 (128 in total) for 50 epochs, which takes around 11 hours. There is no big difference in training time for the Bezier output.

penglo commented 3 months ago

Hello, I encountered some issues with visualization due to modifications I made to the model. Would you be available to discuss these visualization issues with me? If so, my email is lipl23@mails.jlu.edu.cn.