autonomousvision / transfuser

[PAMI'23] TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving; [CVPR'21] Multi-Modal Fusion Transformer for End-to-End Autonomous Driving

Weird vehicle intersection prediction #77

Closed · cozeybozey closed this issue 2 years ago

cozeybozey commented 2 years ago

Hi, I am trying to run the new expert agent from the 2022 update of this repository. However, when I run the simulation with visualization enabled, I get this:

[image: visualization showing the two predicted bounding boxes curving off to the right]

I colored the bounding box located towards the back blue. Both bounding boxes curve all the way to the right for no apparent reason. This curving away from the trajectory always seems to happen unless the car is standing still. I was wondering whether you could give some insight into why this is happening.

kashyap7x commented 2 years ago

Our autopilot attempts to forecast the motion of other vehicles over a long time interval (4 seconds) at intersections.

https://github.com/autonomousvision/transfuser/blob/cc222fe0ae97dafdecabcc53cd43c52e07d6be25/team_code_autopilot/autopilot.py#L79

To forecast, we assume that the background traffic in CARLA will keep repeating its current action in the future. While this assumption generally holds, it leads to errors like the curved predictions you show when the forecast covers a long time horizon. The car being forecast has a very small steering angle in the current frame, but when this action is repeated many times in our forecasting mechanism, the drift accumulates and the final predictions are no longer in the center of the lane.
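As a rough illustration (a minimal sketch, not the repo's actual forecasting code), unrolling a kinematic bicycle model while repeating a small steering action shows how the drift builds up over the 4 s horizon; the wheelbase, timestep, and speed below are illustrative assumptions:

```python
import math

def forecast_action_repeat(x, y, yaw, speed, steer,
                           wheelbase=2.9, dt=0.05, horizon_s=4.0):
    """Unroll a kinematic bicycle model while repeating the current action.

    All parameters (wheelbase, timestep) are illustrative assumptions,
    not values from the repo.
    """
    poses = []
    for _ in range(int(horizon_s / dt)):
        x += speed * math.cos(yaw) * dt
        y += speed * math.sin(yaw) * dt
        # the same small steering angle is applied at every step,
        # so the heading error keeps growing over the full horizon
        yaw += speed / wheelbase * math.tan(steer) * dt
        poses.append((x, y, yaw))
    return poses

# A ~2 degree steering angle, held for 4 s at 8 m/s, rotates the
# forecast heading by roughly 22 degrees away from the lane direction.
final_x, final_y, final_yaw = forecast_action_repeat(
    x=0.0, y=0.0, yaw=0.0, speed=8.0, steer=math.radians(2.0))[-1]
```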

Despite these errors in the long-term prediction, we observe that the autopilot still avoids collisions reasonably well, since the forecasting is more accurate over shorter time intervals.

kashyap7x commented 2 years ago

Actually, I think I misunderstood your question (I guess you are talking about the ego vehicle, not the background vehicles). Could you run the code with the environment variable DEBUG_CHALLENGE=1 set in local_evaluation.sh, so we can also look at the route that needs to be followed in the frame you visualized?

cozeybozey commented 2 years ago

Yes, I am indeed talking about the ego vehicle; the background vehicles seem to work perfectly fine. What I am trying to do is use your expert in our own environment, which has its own route planner. The red numbers in the picture indicate the waypoints of our route planner. This is what I now give to your expert agent, and since it follows the trajectory correctly, I assumed it was working. But do you think the given route can affect those bounding boxes as well?

kashyap7x commented 2 years ago

Yes, the bounding boxes drawn for the ego vehicle depend on the given route. We don't use the action repeat assumption I mentioned earlier for the ego vehicle. Instead, we use a PID controller at every future timestep to get the steering angle that will keep the vehicle in the center of the lane:

https://github.com/autonomousvision/transfuser/blob/cc222fe0ae97dafdecabcc53cd43c52e07d6be25/team_code_autopilot/autopilot.py#L708
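For intuition, here is a minimal sketch of the idea, not the repo's actual implementation: a proportional-only steering controller is re-evaluated at every forecast step against the given route, so the predicted poses track the lane center instead of accumulating drift. The gain, lookahead distance, and motion model are illustrative assumptions:

```python
import math

def extrapolate_with_controller(x, y, yaw, speed, route,
                                kp=1.0, lookahead=3.0, dt=0.05, steps=80):
    """Forecast the ego vehicle by re-running a steering controller at
    every future timestep instead of repeating the current action.

    `route` is a list of (x, y) waypoints. Only a P-term is shown here;
    the repo's controller is a full PID, and the gain, lookahead, and
    vehicle model are illustrative assumptions.
    """
    poses = []
    for _ in range(steps):
        # target the first waypoint at least `lookahead` metres away
        tx, ty = next((p for p in route
                       if math.hypot(p[0] - x, p[1] - y) > lookahead),
                      route[-1])
        heading_err = math.atan2(ty - y, tx - x) - yaw
        # wrap the error to [-pi, pi]
        heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))
        yaw += kp * heading_err * dt  # steer back toward the route each step
        x += speed * math.cos(yaw) * dt
        y += speed * math.sin(yaw) * dt
        poses.append((x, y, yaw))
    return poses
```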

There may be some inconsistency with this controller now that you have modified the given route. Please make sure that your newly provided route is input to both self._turn_controller, which is responsible for the actual driving behavior, and self._turn_controller_extrapolation, which is used for the collision checks. The bounding boxes we plot depend on the route followed by self._turn_controller_extrapolation.

cozeybozey commented 2 years ago

Thanks a lot for the help, I figured out the problem. I am using ground-truth x and y locations from the CARLA simulator, whereas you are using GPS and IMU data, which is in a different format. That is why the bounding boxes in my case are way off. However, I do wonder why you actually use GPS and IMU data for a privileged expert agent. Doesn't the privilege allow you to just use the actual CARLA locations and rotations?
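For anyone hitting the same mismatch: the CARLA GPS sensor reports latitude and longitude, so those values must first be converted to metric coordinates before they are comparable to ground-truth locations. Below is a minimal sketch of such a conversion under an equirectangular approximation; the exact scale constants and axis conventions used by CARLA and the repo may differ:

```python
import math

EARTH_RADIUS = 6371000.0  # mean Earth radius in metres (approximation)

def gps_to_local(lat, lon, lat_ref=0.0, lon_ref=0.0):
    """Convert GPS latitude/longitude to metric x/y around a reference.

    Equirectangular approximation only; CARLA's geo-reference and axis
    conventions (including possible sign flips) may differ, so treat this
    as a sketch of the kind of transform needed, not the repo's code.
    """
    x = EARTH_RADIUS * math.radians(lon - lon_ref) * math.cos(math.radians(lat_ref))
    y = EARTH_RADIUS * math.radians(lat - lat_ref)
    return x, y
```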

kashyap7x commented 2 years ago

You are right, we could directly use positions from the CARLA simulator. Using the GPS data was a design choice we inherited from the expert agent implementation of Learning by Cheating.