autonomousvision / transfuser

[PAMI'23] TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving; [CVPR'21] Multi-Modal Fusion Transformer for End-to-End Autonomous Driving
MIT License
1.12k stars 186 forks

Artifacts in the LiDAR image #221

Closed st3lzer closed 3 months ago

st3lzer commented 4 months ago

Hello, I have a question regarding the sensor position of the LiDAR: During data generation, I lowered the LiDAR sensor so that the engine hood of the ego-vehicle is just barely not visible, meaning the field of view (FOV) of the LiDAR sensor starts just above it. But when using the same sensor configuration and position for evaluation, the engine hood is slightly visible. However, if I raise the sensor by just 2 cm during evaluation, the engine hood becomes invisible again.

I understand that internal processing differs depending on the sensor position, but I am referring specifically to a visualization of the raw .npy files taken directly from the sensor. I am aware that two parameters in the LiDAR sensor configuration differ between data generation and evaluation (DATAGEN), but these parameters do not cause this issue. The vehicle type is the same in both settings.
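For reference, this is roughly how I check the raw frames for hood returns. A minimal sketch, assuming CARLA's raw LiDAR layout of Nx4 float32 points (x, y, z, intensity) in the sensor frame; the `radius` threshold and the synthetic cloud below are illustrative only (a real frame would come from `np.load(...)`):

```python
import numpy as np

def hood_returns(points, radius=3.0):
    """Count LiDAR returns that could lie on the engine hood:
    close to the sensor horizontally and below the sensor origin."""
    near = np.linalg.norm(points[:, :2], axis=1) < radius
    low = points[:, 2] < 0.0
    return int(np.count_nonzero(near & low))

# A raw frame would normally be np.load("<frame>.npy").reshape(-1, 4);
# here a synthetic cloud stands in: one distant point, two hood-like points.
cloud = np.array([
    [20.0, 5.0,  0.5, 1.0],  # far away, above sensor height
    [ 1.5, 0.0, -0.3, 1.0],  # close and below the sensor -> hood candidate
    [ 2.0, 0.4, -0.2, 1.0],  # close and below the sensor -> hood candidate
], dtype=np.float32)

print(hood_returns(cloud))  # -> 2
```

With the same sensor z in both modes I would expect this count to match between data generation and evaluation, which is what surprises me.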

Do you have any idea what could be causing this? Is there a different scaling of the vehicles in these two modes, or are points from the sensor detected earlier?

Kait0 commented 4 months ago

Hm, I don't know. I would expect the point clouds to be the same if you use the same settings. The code changes the coordinate system of the data in various places, but if you extract the point clouds directly from the CARLA leaderboard there should be no difference as long as the parameters are set the same.

Some pointers for debugging: You can check the parameters here.

To get the raw data from the CARLA leaderboard you need to pick it up here.

Documentation of the LiDAR sensor.
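Once you have raw frames from both modes, a quick way to quantify the difference is to compare the lowest near-field return height, which acts as a proxy for whether the hood is being hit. A hedged sketch, assuming Nx4 (x, y, z, intensity) arrays; the two frames below are synthetic placeholders for arrays you would load with `np.load(...).reshape(-1, 4)`:

```python
import numpy as np

def near_field_min_z(points, radius=3.0):
    """Lowest return height (z) within `radius` m of the sensor."""
    mask = np.linalg.norm(points[:, :2], axis=1) < radius
    return float(points[mask, 2].min()) if mask.any() else float("nan")

# Placeholder frames: one from data generation, one from evaluation.
datagen = np.array([[1.0, 0.0, -0.25, 1.0],
                    [10.0, 2.0, 0.30, 1.0]], dtype=np.float32)
evaluation = np.array([[1.0, 0.0, -0.27, 1.0],
                       [10.0, 2.0, 0.30, 1.0]], dtype=np.float32)

# A systematic offset of ~0.02 m here would match your observation that
# raising the sensor by 2 cm during evaluation hides the hood again.
print(near_field_min_z(datagen) - near_field_min_z(evaluation))
```

If the offset is consistent across frames, that points at a sensor-placement difference rather than a processing step.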