CentroEPiaggio / pacman-DR53

This repository contains the necessary tools to run the demo contemplated in DR 5.3 of the PaCMan Project.
GNU General Public License v2.0

Visual object tracking for visual servoing #20

Open carlosjoserg opened 9 years ago

carlosjoserg commented 9 years ago

This issue depends on #19 and the rigid-body-tracker that will be used to improve CentroEPiaggio/phase-space#2

The algorithm for the object-not-grasped situation could be something like:

init

  1. Estimate the object pose (O) from the complete object database
  2. Initialize the rigid-body-tracker with O
  3. Use the rigid-body-tracker uncertainty measure to build the corresponding uncertainty box (B) (this is not the bounding box; it is a box whose size is inversely proportional to how certain the rigid-body-tracker is about the tracked pose)

loop

  4. Use the rigid-body-tracker prediction to update the pose and size of B
  5. Filter the scene point cloud with a pass-through filter using the updated B as limits (see the sketch after this list)
  6. Estimate the object pose O in the filtered scene, using only the object recognized in step 1
  7. Update the rigid-body-tracker with the new measurement O

end
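
A minimal sketch of steps 4–5 in PCL, assuming the rigid-body-tracker prediction is available as an Eigen pose for B plus a vector of uncertainty-scaled half-extents (both hypothetical inputs; function and variable names are illustrative). The cloud is first expressed in the frame of B, so plain pass-through filters on x/y/z act as the box limits:

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/transforms.h>
#include <pcl/filters/passthrough.h>
#include <Eigen/Geometry>

// Steps 4-5 (sketch): keep only the scene points that fall inside the
// uncertainty box B.
// box_pose  : pose of B predicted by the rigid-body-tracker (hypothetical input)
// half_size : half-extents of B, grown with the tracker uncertainty
pcl::PointCloud<pcl::PointXYZ>::Ptr
cropToUncertaintyBox(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& scene,
                     const Eigen::Affine3f& box_pose,
                     const Eigen::Vector3f& half_size)
{
  // Express the scene in the frame of B so the box is axis-aligned at the origin.
  pcl::PointCloud<pcl::PointXYZ>::Ptr in_box_frame(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::transformPointCloud(*scene, *in_box_frame, box_pose.inverse());

  // Chain one pass-through filter per axis, using the half-extents as limits.
  const char* axes[3] = {"x", "y", "z"};
  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered = in_box_frame;
  for (int i = 0; i < 3; ++i)
  {
    pcl::PassThrough<pcl::PointXYZ> pass;
    pass.setInputCloud(filtered);
    pass.setFilterFieldName(axes[i]);
    pass.setFilterLimits(-half_size[i], half_size[i]);
    pcl::PointCloud<pcl::PointXYZ>::Ptr out(new pcl::PointCloud<pcl::PointXYZ>);
    pass.filter(*out);
    filtered = out;
  }
  return filtered;
}
```

pcl::CropBox could do the same in a single filter; the chained pass-through version is shown only because it matches the wording of step 5.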

In the object-grasped situation we need to handle hand occlusions. With the IMU-glove we get an accurate estimate of the joint angles, so we can extract the points that belong to the hand. On the other hand, we can benefit from knowing the hand pose (H) when the hand is mounted on the arm. The algorithm is modified in the following steps:

  1. Use the rigid-body-tracker prediction to update the size of B. Use the hand pose H to update the pose of B if the hand is mounted on the arm, or use the rigid-body-tracker prediction to update the pose otherwise.
  2. Filter the scene point cloud with a pass-through filter using the updated B as limits, and extract the points corresponding to the hand (a sketch follows).
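
A hedged sketch of the hand-point extraction in step 2, assuming a cloud sampled on the hand surface is already available (e.g. from the hand model posed with the IMU-glove joint angles and H; how that cloud is obtained is not shown, and all names are illustrative):

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <cstdint>
#include <vector>

// Drop every scene point that lies within `margin` of the hand surface,
// leaving (approximately) only the object points.
pcl::PointCloud<pcl::PointXYZ>::Ptr
removeHandPoints(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& scene_in_box,
                 const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& hand_surface,
                 float margin = 0.01f)  // 1 cm tolerance, illustrative value
{
  pcl::KdTreeFLANN<pcl::PointXYZ> tree;
  tree.setInputCloud(hand_surface);

  pcl::PointCloud<pcl::PointXYZ>::Ptr object_only(new pcl::PointCloud<pcl::PointXYZ>);
  std::vector<int> idx(1);
  std::vector<float> sq_dist(1);
  for (const auto& p : scene_in_box->points)
  {
    // Keep the point only if no hand-surface point is closer than `margin`.
    if (tree.nearestKSearch(p, 1, idx, sq_dist) == 0 || sq_dist[0] > margin * margin)
      object_only->points.push_back(p);
  }
  object_only->width    = static_cast<std::uint32_t>(object_only->points.size());
  object_only->height   = 1;
  object_only->is_dense = false;
  return object_only;
}
```
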
hamalMarino commented 9 years ago

Just to mention, I found this, which could also be used if we associate a 2D image with each 3D view (I'd go for BW images only). I'm not sure about the richness of features required for the algorithm to work, though.

Obviously, this does not remove the need for the rigid-body-tracker; it just allows using inputs different from those that can be given as in CentroEPiaggio/phase-space#2.

carlosjoserg commented 9 years ago

That's a great input, indeed. In general, it looks like there are RGB-only and Depth-only ways to estimate an object pose.

One RGB-D method I remember is the Hierarchical Matching Pursuit from the University of Washington (Dieter Fox's group). This is the latest related paper.

Recall that in CentroEPiaggio/phase-space#2 we need to be consistent with the object reference frames w.r.t. the hand. The object pose estimator should be the same, or at least use the same object meshes, both for grasp acquisition and for the online demos.

Tabjones commented 9 years ago

Hi all, I made a prototype visual object tracker this morning. It's neither that fast nor super accurate, but it actually works! I made some videos to show you; get them here: http://131.114.31.70:8080/share.cgi?ssid=0qZE5kV.

Right now it runs at roughly 3 Hz and it doesn't get all the poses right, but as I said it's a prototype and I can improve it.

A few things to notice are:

  • Rotations around the object axis of symmetry (the blue Z axis in the videos) are not detected correctly or fully. I think I can improve this, but IMO it's not a big issue and it should not happen for non-symmetric objects.
  • The tracker does not care much if you put your hands on the object; estimations stay reasonably correct with hands on it.
  • For standard movements, like pick and place, the tracker seems to perform nicely.

hamalMarino commented 9 years ago

Unfortunately we cannot ignore rotations around an axis of symmetry, as all grasps are defined with full rotation matrices. Anyway, we can work around this by not checking the rotation about the axes of symmetry if the object has any (e.g. by adding such a field in the object description table); let's think about it!
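
As a rough illustration of "not checking the rotation about axes of symmetry" (the symmetry-axis field and all names here are hypothetical, not part of the current object description table):

```cpp
#include <Eigen/Geometry>
#include <algorithm>
#include <cmath>

// Rotation error between two candidate object orientations. If the object
// description declares a symmetry axis, only the angle between the two
// symmetry axes is compared, ignoring spin about that axis; otherwise the
// full rotation error is used.
double rotationError(const Eigen::Quaterniond& q_a,
                     const Eigen::Quaterniond& q_b,
                     bool has_symmetry_axis,
                     const Eigen::Vector3d& symmetry_axis = Eigen::Vector3d::UnitZ())
{
  if (!has_symmetry_axis)
    return q_a.angularDistance(q_b);            // full rotation error

  const Eigen::Vector3d axis_a = q_a * symmetry_axis;
  const Eigen::Vector3d axis_b = q_b * symmetry_axis;
  const double c = std::max(-1.0, std::min(1.0, axis_a.dot(axis_b)));
  return std::acos(c);                          // angle between the two axes
}
```

With such a field, a pose estimate that differs only by spin about the declared axis would count as correct.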

I'll try to watch the videos, but the internet is not working that well here... Maybe I can manage to download them later!


carlosjoserg commented 9 years ago

tracker1.ogv looks really good!

It seems like the pose is not being filtered, that is, you are detecting the pose at the most recent point cloud frame, right?

Are you using a box to extract the interesting part of the cloud after the first detection, as outlined above in the issue?

Tabjones commented 9 years ago

@carlosjoserg Yes, there's no Kalman filtering, nor any filtering at all! And yes, I'm using a fixed box of about 40 cm width centered on the object (so it follows the object around).