autonomousvision / neat

[ICCV'21] NEAT: Neural Attention Fields for End-to-End Autonomous Driving

The dataset format #9

Closed. exiawsh closed this issue 1 year ago.

exiawsh commented 2 years ago

Hello, thank you for your great work! If I generate the dataset correctly, what will its folder structure look like? Could you give me an example? Just a screenshot would be fine.

kashyap7x commented 2 years ago

The data generated is structured as follows:

- SAVE_PATH: provided in run_evaluation.sh
    - {routes_file_name}_{timestamp}: contains data for an individual route
        - rgb_{front, left, right, rear}: multi-view camera images at 400x300 resolution
        - seg_{front, left, right, rear}: corresponding segmentation images
        - depth_{front, left, right, rear}: corresponding depth images
        - lidar: 3D point cloud in .npy format, covering the forward-facing 180 degrees as provided on the Leaderboard
        - lidar_360: point cloud with a 360 degree FOV
        - topdown: BEV segmentation images required for NEAT's auxiliary loss
        - measurements: contains ego-agent's position, velocity and other metadata

Note that the seg, depth and lidar folders are not required for training NEAT, but are used by some of the baselines in our paper.
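
If it helps to sanity-check a generated route folder, something along these lines should load a single frame. The route folder name, the zero-padded frame numbering, and the file extensions below are assumptions for illustration; adjust them to match what you actually see on disk under SAVE_PATH.

```python
import json
from pathlib import Path

import numpy as np
from PIL import Image

# Illustrative paths: point route_dir at one of your
# {routes_file_name}_{timestamp} folders under SAVE_PATH.
route_dir = Path("SAVE_PATH/eval_routes_weathers_10_05_22_11_31")
frame = "0000"

rgb_front = np.array(Image.open(route_dir / "rgb_front" / f"{frame}.png"))  # expect roughly (300, 400, 3)
topdown = np.array(Image.open(route_dir / "topdown" / f"{frame}.png"))      # BEV segmentation labels
lidar = np.load(route_dir / "lidar" / f"{frame}.npy")                        # forward-facing point cloud
with open(route_dir / "measurements" / f"{frame}.json") as f:
    measurements = json.load(f)                                              # ego position, speed, metadata

print(rgb_front.shape, topdown.shape, lidar.shape, sorted(measurements))
```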

exiawsh commented 2 years ago

Is the data structure like this one? And where are the `train_towns = ['Town01']` mentioned in the config? [screenshot]

kashyap7x commented 2 years ago

Yes, the structure is correct. Here you used eval_routes_weathers.xml as the ROUTES variable in run_evaluation.sh. To generate the towns in our training dataset, you would need to use the routes files from here: https://github.com/autonomousvision/neat/tree/main/leaderboard/data/training_routes.
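
If it is useful, a small sketch like the one below can list which towns a routes file covers before you set it as ROUTES. It assumes the standard Leaderboard route XML layout, where each `<route>` element has a `town` attribute; the file path is just an example.

```python
import xml.etree.ElementTree as ET

# Example path; substitute any file from leaderboard/data/training_routes/.
routes_file = "leaderboard/data/training_routes/routes_town01_short.xml"

routes = list(ET.parse(routes_file).iter("route"))
towns = sorted({r.get("town") for r in routes})
print(f"{routes_file}: {len(routes)} routes in towns {towns}")
```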

exiawsh commented 2 years ago

OK, I got it. Thank you very much!

exiawsh commented 2 years ago

Sorry, this is my first time using CARLA and I still have some questions. If I want to generate the towns in your training dataset, do I just need to modify the ROUTES variable in run_evaluation.sh? I have gotten some errors, so I am wondering whether I also need to modify other variables.

exiawsh commented 2 years ago

In fact, I am mainly interested in the BEV semantic segmentation with the implicit neural representation, so I would like to get some data for quick debugging... Sorry to bother you.

exiawsh commented 2 years ago

I have solved the above problems. It seems I need to restart the terminal after changing the variable.

kashyap7x commented 2 years ago

If you want to quickly access some data, we have released the dataset for TransFuser here: https://github.com/autonomousvision/transfuser#dataset

See the README in that repository for more details. The release includes a large-scale dataset (406 GB) that contains the BEV semantic segmentation labels for NEAT.

exiawsh commented 2 years ago

But you have said there are some differences between the two repos? Can I use the TransFuser dataset directly?

kashyap7x commented 2 years ago

The main difference is the weather (TransFuser uses 14 fixed weather presets, while NEAT randomly samples the weather parameters from Gaussian distributions). There are also minor differences in how the autopilot drives. If you are not specifically interested in robustness to new weather conditions, you can directly use the dataset from TransFuser.
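
For illustration, randomized weather along those lines can be set up with the CARLA Python API roughly as follows. The means, standard deviations, and clipping ranges below are placeholders, not the exact values used for NEAT's data collection.

```python
import random

import carla  # CARLA Python API


def sample_random_weather(seed=None):
    """Draw weather parameters from Gaussians (illustrative values only)."""
    rng = random.Random(seed)

    def gauss(mu, sigma, lo=0.0, hi=100.0):
        # Gaussian sample clipped to the parameter's valid range.
        return min(max(rng.gauss(mu, sigma), lo), hi)

    return carla.WeatherParameters(
        cloudiness=gauss(30.0, 20.0),
        precipitation=gauss(10.0, 15.0),
        precipitation_deposits=gauss(10.0, 15.0),
        wind_intensity=gauss(20.0, 10.0),
        fog_density=gauss(5.0, 10.0),
        wetness=gauss(10.0, 15.0),
        sun_azimuth_angle=rng.uniform(0.0, 360.0),
        sun_altitude_angle=gauss(45.0, 25.0, lo=-90.0, hi=90.0),
    )


# Applying it to a running simulator:
# client = carla.Client("localhost", 2000)
# client.get_world().set_weather(sample_random_weather(seed=0))
```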

exiawsh commented 2 years ago

OK, thank you for your patience. Your work is really great! As a newcomer, I admire you very much!