edvart-ros opened 2 years ago
It looks like the wrong depth is used for the TF. The algorithm is pretty simple: https://github.com/introlab/find-object/blob/f7c38f91d0b076f365592974ff75972aa668b7f1/src/ros/FindObjectROS.cpp#L87-L92 it finds the depth in the middle of the object, then applies the camera calibration matrix to find the corresponding 3D point in the camera frame.
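The step described above amounts to a standard pinhole back-projection. Here is a minimal sketch of that idea (the struct and function names are made up for illustration, not find-object's actual API), assuming depth is in meters and the intrinsics come from `camera_info`:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical pinhole intrinsics, as published in camera_info (K matrix).
struct Intrinsics { double fx, fy, cx, cy; };

struct Point3 { double x, y, z; };

// Back-project pixel (u, v) with measured depth d (meters) into the
// camera optical frame: x right, y down, z forward.
Point3 backProject(const Intrinsics& K, double u, double v, double d)
{
    Point3 p;
    p.x = (u - K.cx) * d / K.fx;
    p.y = (v - K.cy) * d / K.fy;
    p.z = d; // in an optical frame, z is the depth reading itself
    return p;
}
```

A pixel at the principal point maps to x = y = 0 with z equal to the depth, so if the z values differ between the two cases (1.18 m vs 0.55 m for the same physical object), the depth lookup itself is the first thing to suspect.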
You may put the depth and RGB images side by side, just to make sure they match perfectly. It seems they do, as the colors in the point cloud appear in the right places. In the right case the z value is 1.18 meters, while in the left case it is 0.55 meters. The other possibility is that the camera calibration matrix is wrong (for example, the width/height in camera_info doesn't match the actual size of the depth image).
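If the camera_info resolution doesn't match the depth image, the intrinsics must be rescaled before projecting, otherwise the computed x/y (and the resulting TF) will be off by the scale factor. A minimal sketch of that correction (hypothetical names, not find-object's code):

```cpp
#include <cassert>

// Hypothetical intrinsics plus the resolution they were calibrated for.
struct CameraIntrinsics { double fx, fy, cx, cy; int width, height; };

// Rescale pinhole intrinsics when the depth image resolution differs
// from the resolution camera_info was calibrated at. Focal lengths and
// the principal point scale linearly with the image size.
CameraIntrinsics rescaleIntrinsics(const CameraIntrinsics& K,
                                   int actualWidth, int actualHeight)
{
    double sx = static_cast<double>(actualWidth)  / K.width;
    double sy = static_cast<double>(actualHeight) / K.height;
    return CameraIntrinsics{K.fx * sx, K.fy * sy, K.cx * sx, K.cy * sy,
                            actualWidth, actualHeight};
}
```

For example, intrinsics calibrated at 1280x720 applied unscaled to a 640x360 depth image would roughly double the lateral offset of every projected point.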
This package works pretty well, but I've come across a strange bug. Whenever the detected object crosses a certain threshold, the TF is messed up. The object still seems to be detected correctly, but something goes wrong when it tries to publish the TF for the object.
GUI:
Rviz:
Console output:
When the object is on the other side of this threshold, the pose is correct, with pretty good accuracy. I'm not 100% sure about this, but it seems the pose goes wrong as soon as the object crosses the z axis (when x goes from a positive to a negative value), so maybe that helps.
Functioning normally on the other side of the threshold:
GUI:
Rviz:
Console output:
I don't really understand what's going on here... any ideas? Again, the camera depth is working and the object detection itself seems to work, but I think something is going wrong with the transform.