Closed: xanhug closed this issue 1 year ago
Hi! It looks like the Python environment in the Docker image lacks the cv2 package. You can check Dockerfile.master and add a command that installs the relevant packages. One approach is to install the packages and fix any other issues inside your running container until it works normally, while recording the operations and copying them back into the Dockerfile.
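For example, a minimal addition to Dockerfile.master might look like the sketch below (the exact pip binary and package name depend on the base image, so treat this as an assumption rather than the official fix):

```dockerfile
# Assumed fix: install OpenCV's Python bindings so `import cv2` works.
RUN pip install opencv-python
```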
Making a working Docker image took me a lot of time and involved many steps. I have uploaded my old Dockerfile.master to leaderboard/scripts/Dockerfile_example.master and hope it helps (I cannot guarantee that it still builds a usable image now).
By the way, using the code and model provided directly in this repo may not reproduce a similar result on the leaderboard; the released model is only a demo for showcase purposes.
@deepcs233 Many thanks for your kind help
Hi @deepcs233, I was wondering whether we have to use the CUDA 9.0 provided in the Dockerfile in order to submit to the online leaderboard? CUDA 9.0 is quite outdated, and we would have to downgrade the PyTorch version a lot compared to your requirements.
Sorry, I only tried the default CUDA version provided in the Dockerfile. I just converted the checkpoint format for a lower PyTorch version when submitting the model to the leaderboard; the rest of the code worked fine. I don't know the driver version of the test server; it may support a higher CUDA version.
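If you need to do the same, one possible sketch of such a conversion (file names here are illustrative, not from the repo) is to re-save the checkpoint with PyTorch's legacy serialization format so that an older PyTorch can still load it:

```python
import torch

# Run this with the newer PyTorch that produced the checkpoint.
ckpt = torch.load("interfuser.pth.tar", map_location="cpu")

# _use_new_zipfile_serialization=False writes the pre-1.6 legacy format,
# which older PyTorch versions can still read.
torch.save(ckpt, "interfuser_legacy.pth.tar", _use_new_zipfile_serialization=False)
```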
Hi @deepcs233, after some modifications I was able to use Docker to evaluate with a local CARLA simulator. I have a few questions about the online submission:
- Do I need to modify ./leaderboard/scripts/run_evaluation.sh for the online submission? Currently I am using mostly your code but changed some paths.
- Do I need to modify export ROUTES=leaderboard/data/training_routes/routes_town05_long.xml?
- Does the online server automatically change the evaluation route, or should we use the default setting provided by the original leaderboard repo?
Hi,
- Probably not required
- Not required
- In my experience, the online server will automatically set the evaluation route and other environment paths.
- You need to make sure that your local Docker image, after the required environment variables (TEAM_CODE, CARLA_ROOT, ROUTES, ...) are set, can run the evaluation normally; see the sketch after this list.
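A minimal local sanity check might look like the following (the image name, mount paths, and GPU flag are assumptions; adapt them to your own setup and make sure a CARLA server is reachable from the container):

```bash
# Hypothetical smoke test of the submission image before uploading.
docker run --rm --gpus all \
  -e CARLA_ROOT=/workspace/CARLA \
  -e TEAM_CODE=/workspace/team_code \
  -e ROUTES=leaderboard/data/training_routes/routes_town05_long.xml \
  my_interfuser_submission:latest \
  /bin/bash leaderboard/scripts/run_evaluation.sh
```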
Dear @deepcs233, I tried to retrain your model and submit it to the online leaderboard. I collected around 1.5 TB of data. However, my evaluation driving score is only around 30%, with route completion 68.208 and infraction rate 0.479.
This score is far lower than your reported result. Any hints on what might have gone wrong?
I'm sorry, I did not run into that myself. But I have some suggestions:
- The online leaderboard does not appear to consider 'stop sign' infractions, so you may want to modify your training code accordingly.
- The controller hyper-parameters could be adjusted, depending on your specific model.
- Have you collected night data, and have you removed frames in which the ego-vehicle was not moving for a long time?
Dear @deepcs233, thank you for your reply. Regarding your first suggestion: are you saying that we should remove the slow-down behavior for 'stop sign' during online leaderboard evaluation?
I have also noticed that during local evaluation, when approaching a "stop sign", the trajectory prediction becomes very short (almost nothing) and the agent is no longer able to move. Did you ever run into this issue? I suspect something went wrong during data collection for the "stop sign" scenario.
Speaking of data collection, I noticed that much of the collected data contains frames in which the ego-vehicle is blocked for a long time. I tried to re-collect the same scenarios, but the vehicle is still blocked; nothing much changes. Is there any way to overcome this? I suspect that simply removing those frames would prevent the agent from learning such scenarios.
I will look into hyper-parameter adjustment, and I have collected night data. By the way, my local CARLA 42-routes benchmark gives a better result, around an 80% driving score, which is still about 10% lower than the value reported for InterFuser, but the gap is much smaller than on the online leaderboard.
Looking forward to your reply :)
Hi! Sorry for the late response, I hadn't noticed it because this is a closed issue :)
> I have also noticed that during local evaluation, when approaching a "stop sign", the trajectory prediction becomes very short (almost nothing) and the agent is no longer able to move. Did you ever run into this issue? I suspect something went wrong during data collection for the "stop sign" scenario.
I haven't encountered this situation. The predicted trajectory in our project is the future route that the ego-vehicle will drive along, like a navigation route, so I think the collected ground truth here should not be very short.
> Speaking of data collection, I noticed that much of the collected data contains frames in which the ego-vehicle is blocked for a long time. I tried to re-collect the same scenarios, but the vehicle is still blocked; nothing much changes. Is there any way to overcome this? I suspect that simply removing those frames would prevent the agent from learning such scenarios.
There is no need to re-collect those scenarios. You can delete the blocked frames and re-arrange the remaining ones; the corresponding code can be found in tools/data.
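If you want to prototype this before digging into tools/data, a rough sketch of the idea (the frame format, speed field, and thresholds are assumptions, not the repo's actual schema) could look like this:

```python
# Hypothetical sketch: drop long runs of frames in which the ego-vehicle is
# essentially stationary, keeping a short prefix of each blocked stretch so
# the agent still sees some "waiting" behaviour.
# `frames` is assumed to be a list of dicts with a "speed" field in m/s.

def filter_blocked_frames(frames, speed_thresh=0.1, max_stationary=20):
    kept, stationary_run = [], []
    for frame in frames:
        if frame["speed"] < speed_thresh:
            stationary_run.append(frame)
        else:
            # Keep only a bounded number of stationary frames per blocked stretch.
            kept.extend(stationary_run[:max_stationary])
            stationary_run = []
            kept.append(frame)
    kept.extend(stationary_run[:max_stationary])
    return kept
```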
Sorry to bother you again, and thanks for your help. My tutor has asked me to reproduce your experimental results, and I want to finish this task by submitting an InterFuser agent to the CARLA leaderboard, but leaderboard_evaluator.py does not seem to work normally inside Docker. I'm not sure whether this is because I was not able to configure the Dockerfile.master correctly; here is the log I get (screenshot missing). Once I run run_evaluation.sh in the terminal, it seems to work normally, like this (screenshot missing). Thanks for your help.