josh-wende opened this issue 2 years ago
Are you using pure visual tracking or lidar? Theoretically, the obstacle coordinates can be transformed correctly simply by changing the rotation matrix.
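To illustrate the idea (a minimal sketch with hypothetical names and values, not Apollo's actual API): an obstacle center detected in the camera frame can be mapped into the vehicle frame with the camera's extrinsic rotation and translation.

```python
import numpy as np

def camera_to_vehicle(p_cam, r_cam_to_vehicle, t_cam_in_vehicle):
    """Map an obstacle position from the camera frame into the vehicle frame."""
    return r_cam_to_vehicle @ np.asarray(p_cam) + np.asarray(t_cam_in_vehicle)

# Hypothetical example: a camera yawed 15 degrees to the left of straight ahead,
# mounted 1.8 m forward and 1.4 m up on the vehicle.
yaw = np.deg2rad(15.0)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([1.8, 0.0, 1.4])
obstacle_cam = np.array([20.0, 0.0, 0.0])  # detection 20 m ahead along the camera's forward axis
print(camera_to_vehicle(obstacle_cam, R, t))  # roughly [21.1, 5.2, 1.4] in the vehicle frame
```

If the extrinsic rotation is correct, that transform is all that should be needed; the open question is where Apollo actually applies it.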
@daohu527 thanks for your response. I'm trying to use both lidar and cameras for obstacle detection. I haven't changed anything in the fusion component.
Where would be the best place to change the rotation matrix? It seems like the obstacle postprocessor is the place where this should happen, but I'm struggling to find a good way to adjust the detections so that everything else (tracking, camera fusion, camera/lidar fusion) gets correct and consistent info.
You need to adjust the parameters in the file front_6mm_extrinsics.yaml.
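For reference, here is a hedged sketch of reading an extrinsics file of that kind and turning its quaternion into a 4x4 transform. The keys below follow the rotation/translation layout I'd expect in an Apollo calibration YAML, but treat them as an assumption and compare against your actual front_6mm_extrinsics.yaml (I believe the real file also carries a header and frame ids).

```python
import yaml
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical content; check the key names against your own extrinsics file.
extrinsics_yaml = """
child_frame_id: front_6mm
transform:
  rotation:    {x: 0.0, y: 0.0, z: 0.1305, w: 0.9914}   # roughly 15 deg of yaw
  translation: {x: 1.8, y: 0.0, z: 1.4}
"""

cfg = yaml.safe_load(extrinsics_yaml)
q = cfg["transform"]["rotation"]
t = cfg["transform"]["translation"]

R = Rotation.from_quat([q["x"], q["y"], q["z"], q["w"]]).as_matrix()  # scipy takes (x, y, z, w)
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = [t["x"], t["y"], t["z"]]
print(T)  # camera-to-parent transform assembled from the file
```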
@daohu527 I did; that's what I meant when I mentioned changing the camera's rotation (sorry for not being clearer). It seems that the camera-based perception only accounts for cameras being pitched up or down, and otherwise assumes the camera points straight forwards. So even with the extrinsics files updated, the bounding boxes are being drawn rotated from where they should be.
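To make the symptom concrete, here is a purely illustrative comparison (not Apollo code; the frame convention and angles are made up): if only the pitch part of the extrinsic is honored, a detection from a yawed camera is placed noticeably off to the side of where it belongs, which matches boxes that look rotated.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical detection 25 m straight ahead, in a forward/left/up-style camera frame.
obstacle_cam = np.array([25.0, 0.0, 0.0])

full = Rotation.from_euler("zyx", [15.0, 2.0, 0.0], degrees=True)       # 15 deg yaw + 2 deg pitch
pitch_only = Rotation.from_euler("zyx", [0.0, 2.0, 0.0], degrees=True)  # yaw ignored, pitch kept

print(full.apply(obstacle_cam))        # roughly [24.1, 6.5, -0.8]: the true position, ~6.5 m to the side
print(pitch_only.apply(obstacle_cam))  # roughly [25.0, 0.0, -0.9]: placed straight ahead instead
```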
An update: it seems that when I rotate both obstacle cameras together, things work fine. When the cameras have a non-pitch rotation relative to each other, though, things don't quite work right. It seems that the camera listed second in the config for fusion_camera_detection_component has its detections output as if it were pointing the same way as the first camera.
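For comparison, a sketch (hypothetical camera names and extrinsics, not the component's actual code) of what correct per-camera handling would look like: each camera's detections transformed with that camera's own rotation, rather than the second camera inheriting the first camera's.

```python
import numpy as np

def rot_z(yaw):
    """Rotation matrix for a yaw angle about the vertical axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical per-camera extrinsic rotations (camera frame -> vehicle frame).
extrinsics = {
    "front_left_6mm":  rot_z(np.deg2rad(+15.0)),
    "front_right_6mm": rot_z(np.deg2rad(-15.0)),
}

# One hypothetical detection per camera, 20 m ahead in each camera's own frame.
detections = {
    "front_left_6mm":  [np.array([20.0, 0.0, 0.0])],
    "front_right_6mm": [np.array([20.0, 0.0, 0.0])],
}

# Expected: every camera's detections use that camera's own extrinsic.
for cam, dets in detections.items():
    for d in dets:
        print(cam, "->", extrinsics[cam] @ d)

# Observed: the second camera's output looks as if it had been transformed with
# the first camera's rotation instead, putting its boxes on the wrong side.
```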
Hi,
I am trying to get camera-based perception to work with two cameras that point forwards, but one is rotated a bit to the left and the other a bit to the right. When outputting obstacle positions, though, it seems to always assume the camera is pointing straight forwards, even if I change the camera's rotation in calibration/data. Is there a way to configure Apollo to transform the detections, or will I need to add this functionality to the perception module myself? Thanks in advance for any help.