autonomousvision / transfuser

[PAMI'23] TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving; [CVPR'21] Multi-Modal Fusion Transformer for End-to-End Autonomous Driving
MIT License

Run Multiple Evaluations #149

Closed mmahdavian closed 1 year ago

mmahdavian commented 1 year ago

Hello

I have trained multiple models and want to evaluate them simultaneously to save time. Is it possible to run the evaluation for different models on the same GPU? What am I supposed to change? Right now I am changing SAVE_PATH to "data/ADVERSARIAL2/", PORT to 2002 (I run a CARLA simulator with the same port number), and TEAM_CONFIG to a different model folder, and I get a segmentation fault:

./leaderboard/scripts/run_evaluation.sh: line 43: 2629179 Segmentation fault (core dumped) python3 ${LEADERBOARD_ROOT}/leaderboard/leaderboard_evaluator.py --scenarios=${SCENARIOS} --routes=${ROUTES} --repetitions=${REPETITIONS} --track=${CHALLENGE_TRACK_CODENAME} --checkpoint=${CHECKPOINT_ENDPOINT} --agent=${TEAM_AGENT} --agent-config=${TEAM_CONFIG} --debug=${DEBUG_CHALLENGE} --record=${RECORD_PATH} --resume=${RESUME} --port=${PORT} --trafficManagerPort=${TM_PORT} --weather=${WEATHER}
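
For reference, these are roughly the overrides I make in leaderboard/scripts/run_evaluation.sh for the second run (the TEAM_CONFIG path below is just a placeholder for the folder of my other trained model):

export PORT=2002                        # the second CARLA server listens on this port
export SAVE_PATH=data/ADVERSARIAL2/     # separate output folder for this evaluation
export TEAM_CONFIG=path/to/other_model  # placeholder: folder containing the second model's checkpoints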

Thank You

Kait0 commented 1 year ago

To evaluate multiple models you will need multiple CARLA servers. By default, CARLA will crash if you try to spawn a second server on the same GPU. From what I have heard, it is possible to run multiple CARLA servers on one GPU if you run each server inside a docker container, but I have no experience with this myself.

The second thing is that the two simulators need to use distinct ports. CARLA uses three different ports: the world port, the streaming port and the traffic manager port. The first two are set when launching the server:

CarlaUE4.sh -carla-rpc-port=${FREE_WORLD_PORT} -nosound -carla-streaming-port=${FREE_STREAMING_PORT} -opengl

The traffic manager port is set on the client side (the --trafficManagerPort / TM_PORT in run_evaluation.sh).
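
So a second server needs its own set of ports, and the PORT in its run_evaluation.sh has to match; for example (the port numbers are arbitrary, just make sure the two sets do not collide):

./CarlaUE4.sh -carla-rpc-port=2000 -nosound -carla-streaming-port=2001 -opengl   # server for evaluation 1
./CarlaUE4.sh -carla-rpc-port=2002 -nosound -carla-streaming-port=2003 -opengl   # server for evaluation 2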

Lastly, I am not sure whether you can run multiple instances on one GPU outside of docker; if it does not work, you can also run the instances inside docker containers.
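
If you go the docker route, the commands would look roughly like this (untested by me; the carlasim/carla image tag, the offscreen-rendering variable and the --gpus flag are assumptions you may need to adapt to your CARLA version and driver setup):

docker run -d --net=host --gpus all -e SDL_VIDEODRIVER=offscreen carlasim/carla:0.9.10.1 /bin/bash ./CarlaUE4.sh -carla-rpc-port=2000 -carla-streaming-port=2001 -nosound -opengl
docker run -d --net=host --gpus all -e SDL_VIDEODRIVER=offscreen carlasim/carla:0.9.10.1 /bin/bash ./CarlaUE4.sh -carla-rpc-port=2002 -carla-streaming-port=2003 -nosound -opengl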