autonomousvision / transfuser

[PAMI'23] TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving; [CVPR'21] Multi-Modal Fusion Transformer for End-to-End Autonomous Driving
MIT License
1.17k stars · 192 forks

How to change camera positions before Evaluating #233

Closed tijaz17skane closed 3 months ago

tijaz17skane commented 3 months ago

I am trying to evaluate TransFuser on a bigger vehicle (carlaCola). If I use the default settings with only the vehicle model changed, I get pretty bad results: the truck wanders off the road and fails to perform basic maneuvers. I think this is because the camera positions were set for a sedan, and I'll have to change them for the truck.

I see some relevant code in scenario_manager_local.py>setup_sensors.py, but I can't figure out how to change the positions, or how to visualize what the camera is looking at before going on with it.
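For reference, agents in the CARLA Leaderboard framework (which TransFuser builds on) declare their cameras as a list of dicts whose `x`/`y`/`z` offsets are in metres relative to the ego vehicle's origin. The sketch below is illustrative, not the repo's actual configuration: the helper name and the specific heights for a sedan versus a truck are assumptions, so check the agent's own `sensors()` method for the real keys and values.

```python
# Hypothetical helper building a Leaderboard-style RGB camera entry.
# Offsets are relative to the ego vehicle's origin; angles are in degrees.
def sensors_for_vehicle(cam_x, cam_z):
    """Return a sensor list with one front camera at the given mount point."""
    return [{
        'type': 'sensor.camera.rgb',
        'x': cam_x, 'y': 0.0, 'z': cam_z,        # mount position (metres)
        'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0,   # mount orientation (degrees)
        'width': 960, 'height': 480, 'fov': 120,
        'id': 'rgb_front',
    }]

# Assumed mount points: a taller, longer truck needs the camera raised
# and moved forward so the hood does not dominate the image.
sedan_sensors = sensors_for_vehicle(cam_x=1.3, cam_z=2.3)
truck_sensors = sensors_for_vehicle(cam_x=2.5, cam_z=3.5)
```

To visualize a candidate position, one option is to spawn an RGB camera at the same transform with the CARLA Python API (`world.spawn_actor(..., attach_to=vehicle)`) and save frames with `image.save_to_disk(...)` while driving manually.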

Also, if someone can comment on whether a model trained using a car autopilot would work on a bigger vehicle or not. (I don't see why it wouldn't, given that both can follow a bicycle model.)

Kait0 commented 3 months ago

The problem you are facing is called viewpoint robustness. Neural networks are not inherently viewpoint robust, which means they only work with the sensor configuration they were trained on. For example, here is a recent paper in that direction: https://arxiv.org/abs/2309.05192

The problem is more likely the computer vision than the vehicle dynamics (for which you could just tune the controller). CARLA simulates full vehicle physics and does not use a bicycle model, as far as I know.
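For concreteness, the kinematic bicycle model the asker refers to is a simple approximation, not what CARLA's physics engine uses. A minimal Euler-integrated sketch (rear-axle reference point, all parameter values illustrative):

```python
import math

def bicycle_step(x, y, yaw, v, steer, accel, wheelbase, dt):
    """One Euler step of the kinematic bicycle model.

    x, y   : position (m), yaw : heading (rad), v : speed (m/s)
    steer  : front-wheel steering angle (rad), accel : acceleration (m/s^2)
    """
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += (v / wheelbase) * math.tan(steer) * dt
    v += accel * dt
    return x, y, yaw, v

# Straight driving at 10 m/s for 1 s: the vehicle should advance 10 m.
x, y, yaw, v = 0.0, 0.0, 0.0, 10.0
for _ in range(10):
    x, y, yaw, v = bicycle_step(x, y, yaw, v, steer=0.0, accel=0.0,
                                wheelbase=2.9, dt=0.1)
```

Note that even in this simple model, a truck's longer wheelbase reduces the yaw rate for the same steering angle, which is one reason a controller tuned for a sedan misbehaves on a truck.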

For your case the best solution would be to collect a new dataset with a sensor configuration designed for the carlaCola truck. You would need to write a script for your compute cluster to do that; we provide a slurm script in this repo that should be easy to adapt.
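The shape of such a cluster script might look like the sketch below. This is a hypothetical skeleton, not the repo's actual slurm script: the partition name, time limit, paths, and flags are all placeholders to adapt, and the real data-collection entry point should be taken from the repository.

```shell
#!/bin/bash
#SBATCH --job-name=collect_carlacola   # hypothetical job name
#SBATCH --partition=gpu                # adjust to your cluster
#SBATCH --gres=gpu:1
#SBATCH --time=48:00:00

# Placeholder commands: start a CARLA server, then run the repo's
# data-collection agent with the truck's sensor configuration.
# Replace both lines with the actual invocations from the repo's slurm script.
./CarlaUE4.sh --world-port=2000 &
python <path-to-data-collection-script> --port 2000
```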

tijaz17skane commented 3 months ago

Thank you so much for your response, I am looking into the solutions.