This is fine. There are some cases where the vehicles in front of the ego-vehicle get stuck, so the ego-vehicle cannot move, which results in an "agent blocked" error.
After running run_evaluation.sh to generate the dataset, I get dataset directories like carla_results/auto_pilot_eval/eval_routes_weathers_02_...
However, the training config in neat/config.py is:
import os

class GlobalConfig:
    """ base architecture configurations """
    # Data
    root_dir = '/is/rg/avg/kchitta/carla9-10_data/2021/apv3'
    train_towns = ['Town01', 'Town02', 'Town03', 'Town04', 'Town05', 'Town06', 'Town07', 'Town10']
    val_towns = ['Town01_long', 'Town02_long', 'Town03_long', 'Town04_long', 'Town05_long', 'Town06_long']
    train_data, val_data = [], []
    for town in train_towns:
        train_data.append(os.path.join(root_dir, town))
        train_data.append(os.path.join(root_dir, town + '_small'))
    for town in val_towns:
        val_data.append(os.path.join(root_dir, town))
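(For reference, a quick way to check which of the per-town directories this config expects are actually present; this is just a sketch, where root_dir is assumed to point at my own generated data rather than the path above:)

import os

root_dir = 'carla_results/auto_pilot_eval'   # adjust to your SAVE_PATH / root_dir
train_towns = ['Town01', 'Town02', 'Town03', 'Town04', 'Town05', 'Town06', 'Town07', 'Town10']

expected = []
for town in train_towns:
    # config.py looks for both a full and a '_small' directory per town
    expected += [os.path.join(root_dir, town), os.path.join(root_dir, town + '_small')]

missing = [d for d in expected if not os.path.isdir(d)]
print('missing directories:', missing)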
So there should presumably be per-town directories like carla_results/auto_pilot_eval/Town... instead. How can I generate results organized this way?
You need to generate data for each town separately by setting the ROUTES variable in run_evaluation.sh to the corresponding routes file.
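For example, one way to automate this is to launch run_evaluation.sh once per routes file. The sketch below is only illustrative: it assumes the script is edited so that ROUTES and SAVE_PATH are taken from the environment instead of its hard-coded exports, and the output paths are placeholders.

import glob
import os
import subprocess

# Launch one data-generation run per training routes file (one town at a time).
# Assumes run_evaluation.sh respects ROUTES / SAVE_PATH from the environment.
for routes_file in sorted(glob.glob('leaderboard/data/training_routes/*.xml')):
    town = os.path.splitext(os.path.basename(routes_file))[0]
    env = os.environ.copy()
    env['ROUTES'] = routes_file
    env['SAVE_PATH'] = os.path.join('carla_results', 'auto_pilot_eval', town)
    subprocess.run(['bash', 'leaderboard/scripts/run_evaluation.sh'], env=env, check=True)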
Thanks, so I need to set each of the file names under leaderboard/data/training_routes as the ROUTES variable in run_evaluation.sh and run the script to generate the training data, right?
What I am concerned about now is how much time and disk space generating this dataset will take.
That's right.
It took us 2-3 days to generate the data on 8 1080Ti GPUs and the total size was around 400G.
Thanks for your quick reply! Do the 8 1080Ti GPUs mean running 8 CARLA servers in parallel, with each server responsible for one route file?
Yes
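A minimal sketch of that setup is shown below; it assumes a CARLA server is already running on each port and that run_evaluation.sh is edited to read PORT, ROUTES and SAVE_PATH from the environment (the port spacing and variable handling here are assumptions, not the repo's exact interface).

import glob
import os
import subprocess

# One run_evaluation.sh instance per GPU, each paired with its own CARLA server.
route_files = sorted(glob.glob('leaderboard/data/training_routes/*.xml'))
procs = []
for gpu, routes_file in enumerate(route_files[:8]):        # 8 GPUs -> 8 parallel runs
    env = os.environ.copy()
    env['CUDA_VISIBLE_DEVICES'] = str(gpu)
    env['PORT'] = str(2000 + gpu * 10)                      # distinct CARLA RPC port per server
    env['ROUTES'] = routes_file
    env['SAVE_PATH'] = os.path.join('carla_results', 'auto_pilot_eval',
                                    os.path.splitext(os.path.basename(routes_file))[0])
    procs.append(subprocess.Popen(['bash', 'leaderboard/scripts/run_evaluation.sh'], env=env))

for p in procs:
    p.wait()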
Hi, thanks for the amazing work! I followed the Data Generation section in README.md (I did not modify any configurations and just enabled the export of SAVE_PATH in the script). The attached video shows the generated front images.
As shown in the video, the weather keeps changing, and this example (a single RouteScenario) consists of 94 frames (different scenarios have different numbers of frames).
For some scenarios, failures like the following occurred:
Does the data generation work as intended? Is there anything wrong with this?
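As a rough post-generation sanity check, one can count how many frames each route directory contains, so routes that crashed or got blocked can be spotted and regenerated. This is only a sketch: 'rgb_front' is an assumed sensor sub-folder name, so adjust it to whatever your SAVE_PATH actually contains.

import os

save_path = 'carla_results/auto_pilot_eval'    # adjust to your SAVE_PATH
for route_dir in sorted(os.listdir(save_path)):
    frames_dir = os.path.join(save_path, route_dir, 'rgb_front')   # assumed sub-folder name
    if os.path.isdir(frames_dir):
        print(route_dir, len(os.listdir(frames_dir)), 'frames')
    else:
        print(route_dir, 'no frames found')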