opendilab / LMDrive

[CVPR 2024] LMDrive: Closed-Loop End-to-End Driving with Large Language Models
Apache License 2.0

Questions about evaluation. #13

Open weimengchuan opened 5 months ago

weimengchuan commented 5 months ago

Hi! I have two questions about evaluation. I'd be grateful for your help.

  1. Where should the dataset directory be set before evaluation?
  2. When the following code runs, the pygame window crashes and the program gets stuck.
    # leaderboard/team_code/lmdriver_agent.py: DisplayInterface.__init__
        self._display = pygame.display.set_mode(
            (self._width, self._height), pygame.HWSURFACE | pygame.DOUBLEBUF
        )

When I execute the following code on its own, the pygame window opens and stays open without any problem.

import pygame
pygame.init()
pygame.font.init()
pygame.display.set_mode((1200, 900), pygame.HWSURFACE | pygame.DOUBLEBUF)
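
For reference, here is a small diagnostic sketch (assuming the standard pygame.display.get_driver() call) that prints which SDL video driver is active; a mismatch between this standalone run and the evaluation environment might be relevant:

import pygame

pygame.init()
# Report which SDL backend pygame selected (e.g. "x11", "wayland", "dummy");
# a headless/server session may pick a different driver than a desktop one.
print("SDL video driver:", pygame.display.get_driver())
print("pygame version:", pygame.version.ver)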

Thanks in advance!

deepcs233 commented 5 months ago

Hi!

  1. You can put the dataset anywhere you can access. When you pretrain or finetune the model, set the dataset path in the config to wherever you placed it.
  2. Could you provide more details or the log? By the way, the two code blocks look identical, so I don't understand what the following comment refers to:
# leaderboard/team_code/lmdriver_agent.py: DisplayInterface.__init__
weimengchuan commented 5 months ago

Hi!

  1. So during evaluation the model's input comes from the simulator, not from the dataset you provided. Is that right?
  2. Sorry for not describing it clearly. Here are the details of the problem I encountered during evaluation. Is there anything wrong with my procedure?
    • My command: First, run CARLA in one console.
      bash carla/CarlaUE4.sh --world-port=2000

      Second, run the evaluation in another console.

      
export PT=2000

export CARLA_ROOT=carla
export CARLA_SERVER=${CARLA_ROOT}/CarlaUE4.sh
export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI
export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI/carla
export PYTHONPATH=$PYTHONPATH:$CARLA_ROOT/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg
export PYTHONPATH=$PYTHONPATH:leaderboard
export PYTHONPATH=$PYTHONPATH:leaderboard/team_code
export PYTHONPATH=$PYTHONPATH:scenario_runner

export LEADERBOARD_ROOT=leaderboard
export CHALLENGE_TRACK_CODENAME=SENSORS
export PORT=$PT # same as the carla server port
export TM_PORT=$(($PT+500)) # port for traffic manager, required when spawning multiple servers/clients
export DEBUG_CHALLENGE=0
export REPETITIONS=1 # multiple evaluation runs
export ROUTES=langauto/benchmark_long.xml
export TEAM_AGENT=leaderboard/team_code/lmdriver_agent.py # agent
export TEAM_CONFIG=leaderboard/team_code/lmdriver_config.py # model checkpoint, not required for expert
export CHECKPOINT_ENDPOINT=results/lmdrive_result.json # results file

export SCENARIOS=leaderboard/data/scenarios/no_scenarios.json #town05_all_scenarios.json

export SCENARIOS=leaderboard/data/official/all_towns_traffic_scenarios_public.json
export SAVE_PATH=data/eval # path for saving episodes while evaluating
export RESUME=False

echo ${LEADERBOARD_ROOT}/leaderboard/leaderboard_evaluator.py
python3 -u ${LEADERBOARD_ROOT}/leaderboard/leaderboard_evaluator.py \
    --scenarios=${SCENARIOS} \
    --routes=${ROUTES} \
    --repetitions=${REPETITIONS} \
    --track=${CHALLENGE_TRACK_CODENAME} \
    --checkpoint=${CHECKPOINT_ENDPOINT} \
    --agent=${TEAM_AGENT} \
    --agent-config=${TEAM_CONFIG} \
    --debug=${DEBUG_CHALLENGE} \
    --record=${RECORD_PATH} \
    --resume=${RESUME} \
    --port=${PORT} \
    --trafficManagerPort=${TM_PORT}


    • The program gets stuck, and the console output is shown below.

leaderboard/leaderboard/leaderboard_evaluator.py:24: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
  import pkg_resources
bc1
localhost 2000
bc2
bc3
leaderboard/leaderboard/leaderboard_evaluator.py:95: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
  if LooseVersion(dist.version) < LooseVersion('0.9.10'):
bc4
/root/anaconda3/envs/lmdrive/lib/python3.8/site-packages/diffusers/models/cross_attention.py:30: FutureWarning: Importing from cross_attention is deprecated. Please import from diffusers.models.attention_processor instead.
  deprecate(
pygame 2.5.2 (SDL 2.28.2, Python 3.8.13)
Hello from the pygame community. https://www.pygame.org/contribute.html
bc5
bc

========= Preparing RouteScenario_3 (repetition 0) =========

Setting up the agent

deepcs233 commented 5 months ago

Hi!

  1. Yes. Our pipeline uses a closed-loop, end-to-end setting, so the evaluation has to interact with the simulator in real time (see the first sketch below).
  2. Sorry, I can't reproduce the error and haven't encountered this problem before. Are you running in a docker/server environment? You could try a newer version of pygame, remove pygame.HWSURFACE | pygame.DOUBLEBUF when initializing the display, or comment out the corresponding pygame code to run the evaluation without the GUI (see the second sketch below).
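
Roughly, the closed-loop setting works like this. Below is a minimal sketch using the CARLA Python API; the run_step policy and the assumption that an ego vehicle already exists are placeholders for illustration, not LMDrive's actual agent code.

import carla

# Connect to the running CARLA server (same port as CarlaUE4.sh).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Hypothetical stand-in for the agent's per-tick policy.
def run_step(snapshot):
    return carla.VehicleControl(throttle=0.3, steer=0.0)  # placeholder action

# Assumes the leaderboard has already spawned an ego vehicle.
ego = world.get_actors().filter("vehicle.*")[0]

while True:
    snapshot = world.wait_for_tick()       # fresh state from the simulator
    ego.apply_control(run_step(snapshot))  # action fed back into the simulator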
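
For the pygame issue, the last two suggestions could look like the sketch below (illustrative only, not a patch from the repo; the dummy-driver line relies on SDL's standard SDL_VIDEODRIVER environment variable).

import os
import pygame

# To run without any GUI on a headless server, force SDL's dummy video
# driver before pygame.init() (uncomment the next line).
# os.environ["SDL_VIDEODRIVER"] = "dummy"

pygame.init()
pygame.font.init()

try:
    # Original call: hardware surface + double buffering.
    display = pygame.display.set_mode((1200, 900), pygame.HWSURFACE | pygame.DOUBLEBUF)
except pygame.error:
    # Fallback: drop HWSURFACE | DOUBLEBUF and use a plain software surface.
    display = pygame.display.set_mode((1200, 900))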
weimengchuan commented 5 months ago

Thanks for your fast response. I will try your recommendations.