PJLab-ADG / DiLu

[ICLR 2024] DiLu: A Knowledge-Driven Approach to Autonomous Driving with Large Language Models
https://pjlab-adg.github.io/DiLu/
Apache License 2.0

Reproduce results #4

Closed Xin-Ye-1 closed 2 months ago

Xin-Ye-1 commented 5 months ago

Thank you for sharing the code! Can you also specify how to reproduce the results reported in the paper? For example, which 10 random seeds were used? I also noticed that the target speeds specified in "run_dilu.py" differ from what is described in the paper; can you clarify this? Is the environment setting defined in "run_dilu.py" also used for GRAD training?

zijinoier commented 4 months ago

Thank you for your interest and for reaching out with your questions!

  1. To reproduce the results reported in our paper using the DiLu framework, you only need to modify two settings in the config.yaml file: set episodes_num to 10 and simulation_duration to 30. This configuration should allow you to replicate the experiments.

  2. Regarding the target_speeds specified in the run_dilu.py script, it's understandable there might be some confusion. These values represent the list of speeds that the surrounding vehicles are programmed to track, rather than the target speed of the ego vehicle. For reference, see: https://highway-env.farama.org/actions/#highway_env.envs.common.action.DiscreteMetaAction

  3. For training with grad, we recommend using the official training code available at grad's GitHub repository.
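For anyone following along, the two settings from point 1 would look roughly like this in config.yaml (a sketch with only these keys shown; the comment meanings are my assumption, not taken from the repo):

```yaml
# config.yaml (excerpt) -- settings to reproduce the paper's experiments
episodes_num: 10          # assumed: number of evaluation episodes
simulation_duration: 30   # assumed: simulated duration per episode
```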

I hope this clarifies your queries. Should you have any further questions or need additional information, please feel free to ask!

Xin-Ye-1 commented 4 months ago

Thank you for the clarification. I believe target_speeds also controls the ego vehicle's speed: https://github.com/Farama-Foundation/HighwayEnv/blob/master/highway_env/envs/common/action.py#L257. In any case, since the config in this repo differs both from the paper and from the GRAD repo (https://github.com/zerongxi/graph-sdc/blob/main/config/graph.yaml#L24), can you clarify which config you used for testing? Thank you!

zijinoier commented 4 months ago

Thank you for your diligent follow-up. After reviewing my code and the discrepancies you've highlighted, I can clarify that for all experiments comparing DiLu with GRAD, the GRAD configuration uses the following speed setting: target_speeds: np.linspace(10, 32, 5). DiLu's config is unmodified, i.e. np.linspace(5, 32, 9). This should ensure consistency across our comparisons and help in reproducing the results. Thank you again for bringing this to my attention.
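To make the two settings concrete, here is a quick sketch of the speed grids they produce (units are m/s per highway-env's convention; the variable names are mine, for illustration only):

```python
import numpy as np

# Speed grid used for the GRAD experiments, per the maintainer's reply:
grad_speeds = np.linspace(10, 32, 5)
# -> 5 evenly spaced values from 10 to 32 (step 5.5)

# DiLu's unmodified default:
dilu_speeds = np.linspace(5, 32, 9)
# -> 9 evenly spaced values from 5 to 32 (step 3.375)

print(grad_speeds)
print(dilu_speeds)
```

Note the two grids differ in both range and resolution, which is why results are sensitive to which config is used at test time.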

github-actions[bot] commented 2 months ago

This issue has been marked stale due to inactivity.