bayesian-object-tracking / dbrt

Depth-Based Bayesian Robot Tracking
GNU General Public License v3.0

Use the package to get the pose of the robot in the camera frame #2

Open simheo opened 4 years ago

simheo commented 4 years ago

Hi,

I am working on a project which uses the dbrt package for a 7 DOF robotic arm. We use our own URDF file and correctly adapted the fusion_tracker_gpu.yaml file to our robot, and tracking works well in RViz.

I would like to recover a bounding box containing the robot (and ideally, later, only the end-effector) from the RGB-D image. My goal is to have the pose of the end-effector in the camera frame. Is it possible to use the package to do that? If so, could I have some directions on where in the package I should work?

Thanks in advance!

wumanu commented 4 years ago

Hi!

The output of dbrt is the joint angles of the robot; anything else has to be handled outside of the package. You can use these joint angles to obtain the pose of any robot link through forward kinematics. The ROS tf package, for instance, does that for you; in fact, if you are visualizing the robot model you are already doing that. So you can, for example, obtain the pose of your end-effector link from tf and then define some bounding box around it.
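To make the forward-kinematics step concrete, here is a minimal, self-contained sketch (a toy 2-link planar arm with made-up link lengths, not the actual 7 DOF robot): each joint contributes a rotation and each link a fixed offset, and chaining the homogeneous transforms yields the end-effector pose in the base frame. This is what tf computes for you from the URDF and the published joint states.

```python
import numpy as np

def rot_z(theta):
    """Homogeneous transform for a revolute joint rotating about z."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans(x, y, z):
    """Homogeneous transform for a fixed link offset."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Toy 2-link planar arm; the joint angles estimated by dbrt would be
# plugged in here (values below are arbitrary examples).
q = [0.3, -0.5]                      # joint angles (rad)
links = [trans(0.4, 0.0, 0.0),       # link offsets (m), made-up values
         trans(0.3, 0.0, 0.0)]

# Chain: base -> joint1 -> link1 -> joint2 -> link2 (end-effector)
T = np.eye(4)
for theta, link in zip(q, links):
    T = T @ rot_z(theta) @ link

ee_position = T[:3, 3]               # end-effector position in the base frame

# A simple axis-aligned bounding box centered on the end-effector:
half = 0.05                          # 5 cm half-extent, arbitrary choice
bbox_min, bbox_max = ee_position - half, ee_position + half
```

In practice you would not hand-roll this: a tf `TransformListener` (or `tf_echo` on the command line) gives you the same chained transform directly from the frames dbrt publishes.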

I hope that helps!

simheo commented 4 years ago

Thanks for the fast answer!

So I should use the tf between /estimated/ORIG and /camera_link to convert the joint angles from dbrt into a pose in the camera frame?

One of my concerns is that, in my situation, the camera is not mounted on the head of the robot but is external, looking at the arm. Thus, in my URDF file I added a link between the camera and the base_link and specified the transform between the two.
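For reference, such an external camera is typically attached in the URDF with a fixed joint like the following sketch (frame names and the measured offset are placeholder values, not taken from this project):

```xml
<!-- Hypothetical fixed joint attaching an external camera to the robot base.
     The origin holds the hand-measured camera pose relative to base_link:
     xyz in metres, rpy in radians. -->
<link name="camera_link"/>
<joint name="base_to_camera" type="fixed">
  <parent link="base_link"/>
  <child link="camera_link"/>
  <origin xyz="1.0 0.2 0.5" rpy="0 0 3.14159"/>
</joint>
```

Note that a fixed joint hard-codes the camera pose: if the physical camera is moved, the URDF (or the static transform publisher) must be updated to match.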

So I was wondering: if during tracking I move the camera drastically (for example by 15 cm), would dbrt still be able to track the arm? When I ran `rosrun tf tf_echo /estimated/ORIG /camera_link`, the tf did not change while I was moving the camera.

I may have done something wrong; I don't have a lot of experience with ROS and the concepts behind tf...

Another question: how did you get the transformation between /estimated/ORIG and /camera_link in your case?

wumanu commented 4 years ago

You need to figure out two things: 1) What is the frame of your camera? In camera.yaml you specify which image and which camera_info topic dbrt uses; you can echo the camera_info topic and you will see the camera frame. 2) What is the frame of the end-effector (or whatever link you care about)? You can visualize the frames in RViz (tf) and see which one you need.

Then you ask tf for the transform from estimated/frame1 to estimated/frame2 (or the other way round, depending on your goal).
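As a concrete sketch of those steps (the topic and frame names below are assumptions; substitute the ones from your own camera.yaml and tf tree):

```shell
# 1) Find the camera frame: echo one camera_info message from the topic
#    configured in camera.yaml (topic name here is a placeholder).
rostopic echo -n 1 /camera/depth/camera_info | grep frame_id

# 2) Inspect the whole tf tree to locate the end-effector frame
#    (writes frames.pdf in the current directory).
rosrun tf view_frames

# 3) Query the transform between the two estimated frames
#    (frame names here are placeholders).
rosrun tf tf_echo estimated/camera_link estimated/ee_link
```

These commands require a running ROS master with dbrt publishing; in a node you would use a `tf.TransformListener` to do the same lookup programmatically.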