opendilab / LMDrive

[CVPR 2024] LMDrive: Closed-Loop End-to-End Driving with Large Language Models
Apache License 2.0

Evaluation Scripts Clarification #7

Open ryhnhao opened 10 months ago

ryhnhao commented 10 months ago

Hi @deepcs233, thanks for your great work.

Our team is investigating this topic and hopes to run the code and reproduce the metrics reported in your paper.

However, we are having difficulties with the evaluation (perhaps because we are unfamiliar with running the CARLA server).

Could you add more details to the evaluation section of README.md? It would be great if your guidance could help us replicate the metrics claimed in your paper.

Thanks again.

deepcs233 commented 10 months ago

Hi! Thank you for your attention to this project. However, I'm not sure what additional details the evaluation section needs right now. Could you describe the specific problems or points of confusion you ran into during reproduction?

yanfushan commented 10 months ago

Hi @deepcs233, thanks for your great work. I am a student. Following the pre-training introduction in README.md, I found the vision encoder model (vision_encoder/timm/models/memfuser.py), but for the fine-tuning stage (LAVIS, bash run.sh) I couldn't find a file like vision_encoder/timm/models/memfuser.py, so I don't understand how the model is created. Where is it located in the LAVIS folder? Thanks again.

deepcs233 commented 10 months ago

Hi! You need to install the modified timm package (follow the setup section).

cd vision_encoder
pip3 install -r requirements.txt
python setup.py develop # if you have installed timm before, please uninstall it
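
As an optional sanity check that the modified package was picked up (this assumes the memfuser models register themselves with timm's model registry, which the model name used later in lmdrive_config.py suggests):

python -c "import timm; print([m for m in timm.list_models() if 'memfuser' in m])"
# should print a non-empty list, e.g. containing 'memfuser_baseline_e1d3_return_feature'
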
yanfushan commented 10 months ago

@deepcs233 Thank you for your reply!

ryhnhao commented 10 months ago

Hi @deepcs233

I'm trying to run the evaluation through run_evaluation.sh:

CUDA_VISIBLE_DEVICES=0 ./leaderboard/scripts/run_evaluation.sh

There are two specific issues here:

  1. As shown in figure 1 (screenshot), CARLA doesn't seem to work properly. How can I solve this?

BTW, I have followed your instructions in the "Download and setup CARLA 0.9.10.1" part of README.md.

BTW, I notice the code is based on CARLA 0.9.10.1, but the pip-installed carla package is version 0.9.15. Does that matter? Most users will simply pip install carla following the "Download and setup CARLA 0.9.10.1" part of README.md (carla==0.9.10 or 0.9.10.1 cannot be pip-installed, since those versions were never released on PyPI), so if others don't run into the same question, it may not matter.

  2. leaderboard/scripts/run_evaluation.sh mentions that the checkpoint info is in results/lmdrive_result.json, and leaderboard/team_code/lmdrive_config.py also mentions the checkpoint info as follows:

llm_model = '/data/llava-v1.5-7b'
preception_model = 'memfuser_baseline_e1d3_return_feature'
preception_model_ckpt = 'sensor_pretrain.pth.tar.r50'
lmdrive_ckpt = 'lmdrive_llava.pth'

So how is the structure of the checkpoints organized in results/lmdrive_result.json? Could you supply an example lmdrive_result.json?

Thanks a lot.

Best regards.

deepcs233 commented 10 months ago

Hi!

  1. Can you show the command that starts your CARLA server? The ports set for the CARLA server and in run_evaluation.sh need to be the same (see the sketch after this list). The version is fine: 0.9.10.1 is the version of the CARLA application, while 0.9.15 is the version of the carla Python API, and the two do not need to match.
  2. results/lmdrive_result.json only records the evaluation results and details; the file will be generated once the evaluation finishes.
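
For reference, a minimal sketch of what matching ports looks like in practice (this assumes the packaged CarlaUE4.sh launcher and that run_evaluation.sh exposes the server port as a variable such as PORT; check your copy of the script):

# Terminal 1: start the CARLA 0.9.10.1 server on an explicit port
./CarlaUE4.sh --world-port=2000 -opengl
# Terminal 2: make sure the port in leaderboard/scripts/run_evaluation.sh (e.g. PORT=2000) matches,
# then launch the evaluation
CUDA_VISIBLE_DEVICES=0 ./leaderboard/scripts/run_evaluation.sh
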
ryhnhao commented 10 months ago

Hi! Thanks for your prompt reply!

  1. The CARLA server start command and the leaderboard evaluator command are both taken from run_evaluation.sh.

As I understand it, the CARLA server start command is in the red box and the leaderboard evaluator command is in the green box. The ports for the CARLA server and the leaderboard evaluator are the same.

(screenshot)

  2. OK, got it.

Thanks again.

deepcs233 commented 10 months ago

Hi! Could you try sleeping for 10 seconds to wait for your CARLA server to start, and use nvidia-smi or the GUI to check whether the server is ready?
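
Something along these lines (an illustrative sequence, not a script from this repo; adjust the port and paths to your setup):

./CarlaUE4.sh --world-port=2000 -opengl &   # start the server in the background
sleep 10                                    # give the server time to come up
nvidia-smi                                  # CarlaUE4 should appear in the GPU process list once ready
CUDA_VISIBLE_DEVICES=0 ./leaderboard/scripts/run_evaluation.sh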

yanfushan commented 10 months ago

(screenshot: 20240122-181038) Hello! Thank you for your excellent work. I have some questions about reading the dataset; it may be a relatively simple problem, but I would like to ask for your advice. When reading the file "navigation_instruction_list.txt", I am not quite clear about the meanings of strings such as "Follow-03-s1 dis", "Other-01", "Turn-04-S", "Turn-06-S-L", especially parts like "s1", "dis", "Other", "s-L", "s", etc. Could you explain them? Thank you again.

MayDGT commented 4 months ago

Hi @deepcs233 @ryhnhao, I met the same problem when executing ./leaderboard/scripts/run_evaluation.sh for the evaluation:

  • UnboundLocalError: local variable 'leaderboard_evaluator' referenced before assignment
  • AttributeError: 'LeaderboardEvaluator' object has no attribute 'manager'

How did you solve it?

Update: In my case, the two errors occurred after another error:

Following this solution: https://github.com/salesforce/LAVIS/issues/571#issuecomment-1895376589, the original two errors disappeared and the evaluation process was successfully started.

CoderXuans commented 1 month ago

@yanfushan Have you figured it out now?