autonomousvision / transfuser

[PAMI'23] TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving; [CVPR'21] Multi-Modal Fusion Transformer for End-to-End Autonomous Driving
MIT License

training weather #139

Closed mmahdavian closed 1 year ago

mmahdavian commented 1 year ago

Hello

Thank you for your great repository. I was wondering which weather conditions you used to train the model in the paper. Is it only the ClearNoon weather, as in the validation dataset?

Thank You

kashyap7x commented 1 year ago

I have added the relevant information (Fig. 1b in Section 2.5 of the CVPR supplementary material) below:

[Image: Fig. 1b from the CVPR supplementary material, showing the weather distribution of the datasets]

We sequentially vary the weather condition every 30 seconds within each route. However, since a significant proportion of the routes used for generating data are short, many routes never cycle through all 14 weather conditions, so the weather distribution is skewed towards the initial weathers ("Experiments" data in Fig. 1b). This is not a problem for our internal evaluation, since we fix the environmental conditions to 'ClearNoon' during evaluation in order to decouple the main new challenge (scenarios) from factors related to visual generalization. However, for submission to the CARLA Leaderboard (Sec. 5), we regenerate our data while varying the weather condition every 0.5 seconds of driving, so that the distribution is uniform across all weathers ("Leaderboard" data in Fig. 1b).
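To illustrate why the cycling interval matters, here is a minimal sketch of the scheme described above. The preset names and the `weather_at` helper are hypothetical (not CARLA's actual API); it only assumes 14 presets cycled at a fixed interval:

```python
# Hypothetical sketch of cycling through 14 weather presets at a fixed
# interval of driving time. Preset names are placeholders, not CARLA's.
WEATHER_PRESETS = [f"preset_{i}" for i in range(14)]

def weather_at(elapsed_s: float, interval_s: float) -> str:
    """Return the preset active after `elapsed_s` seconds of driving,
    switching to the next preset every `interval_s` seconds."""
    idx = int(elapsed_s // interval_s) % len(WEATHER_PRESETS)
    return WEATHER_PRESETS[idx]

# A short 60 s route with a 30 s interval only ever sees the first two
# presets -- the skew toward initial weathers in the "experiments" data.
times = [i * 0.5 for i in range(120)]  # sample every 0.5 s over 60 s
skewed = {weather_at(t, 30.0) for t in times}    # {"preset_0", "preset_1"}

# With a 0.5 s interval, the same short route cycles through all 14
# presets, giving the uniform "leaderboard" distribution.
balanced = {weather_at(t, 0.5) for t in times}   # all 14 presets
```

This makes the trade-off concrete: with the 30 s interval, a route must last 7 minutes to visit all 14 presets, whereas the 0.5 s interval covers them within 7 seconds.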

Both the imbalanced "experiments" dataset (clear_weather_data) and balanced "leaderboard" dataset (14_weathers_data) were uploaded in our cvpr2021 branch.