wg-perception / capture

An object capture toolbox.

Mask image remains black #6

Closed beetleskin closed 11 years ago

beetleskin commented 11 years ago

Hi again,

I tried to run the object_recognition_capture with a template but the mask image remains dark and no capturing is performed:

rosrun object_recognition_capture capture -i my_textured_plane -o orc_scan_dataIntensiv.bag -n 12 --preview --seg_z_min 0.0001

The pose estimation seems to work; the coordinate origin is reprojected correctly onto the template plane: [image: orc_no_mask]

I tried different optional parameters, but nothing changed. I compiled ecto and wg_perception completely from source within a catkin groovy workspace on Ubuntu 12.04; the data comes from a Kinect.

vrabaud commented 11 years ago

Great, thx. It works for me here from packages.

beetleskin commented 11 years ago

Yes, as you can see in the image, the pose is drawn correctly in most frames, and the matches look good too. Could this be related to the OpenNI drivers? I built and installed OpenNI and SensorKinect from source.

vrabaud commented 11 years ago

Ah, my bad, I answered through mail and did not see the image. It looks good. You never ever get anything in the mask? (Even when you're far, close, seeing from the top, or with a different object?)

beetleskin commented 11 years ago

Nope, nothing ever. Just black.

vrabaud commented 11 years ago

ok, might be related to your other bug. As you have everything from source, source your setup.sh and then run either of the two scripts in ecto_opencv/samples/rgbd/plane* (one tracks planes, the other segments objects on top). You should see colors for each plane it finds; please let me know how that goes.
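
Concretely, something like this (a sketch; the exact workspace path and script names are assumptions):

cd ~/catkin_ws
source devel/setup.sh
python src/ecto_opencv/samples/rgbd/plane_sample.py    # tracks planes
python src/ecto_opencv/samples/rgbd/plane_cluster.py   # segments objects on top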

beetleskin commented 11 years ago

Interesting! The plane_cluster.py crashes:

stfn@stfn-MacBook:~/devn/ros_dai_groovy_catkin$ python src/ecto_kitchen/ecto_opencv/samples/rgbd/plane_cluster.py 
Traceback (most recent call last):
  File "src/ecto_kitchen/ecto_opencv/samples/rgbd/plane_cluster.py", line 37, in <module>
    connections = [ source['depth_raw'] >> depth_to_3d['depth'],
  File "/home/stfn/devn/ros_dai_groovy_catkin/src/ecto_kitchen/ecto/python/ecto/blackbox.py", line 255, in __getitem__
    return self.__impl[key]
ecto.EctoException:            exception_type  EctoException
                 diag_msg  no inputs or outputs found
                cell_name  Source
              tendril_key  depth_raw

Might this be related to the malfunctioning OpenNI driver? (I ran into this problem with RoboEarth.) capture_openni_usb.py yields the same error. The plane_sample.py, however, seems to work fine: [images: plane_sample1, plane_sample2, plane_sample3]

vrabaud commented 11 years ago

The plane cluster script is fine, just git pull it; I fixed it the other day. Those results look fine though.
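
For a from-source checkout, the update would look something like this (the workspace path is taken from the traceback above; a plain catkin_make build is assumed):

cd ~/devn/ros_dai_groovy_catkin/src/ecto_kitchen/ecto_opencv
git pull
cd ~/devn/ros_dai_groovy_catkin
catkin_make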

vrabaud commented 11 years ago

Ok, I do remember a problem that I forgot to fix and that seems to fit your data: no pixel of your object is touching the plane, there are only NaNs around it. Let me fix that one at least.

vrabaud commented 11 years ago

actually, did you try changing the --seg_radius_crop value? Set it to something large like 0.5 or 1 meter.
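
For example, appended to the capture call from above (0.5 is just a starting point):

rosrun object_recognition_capture capture -i my_textured_plane -o orc_scan_dataIntensiv.bag -n 12 --preview --seg_z_min 0.0001 --seg_radius_crop 0.5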

beetleskin commented 11 years ago

Ok thanks. This is what the clustering looks like: [image: plane_clusters1]

vrabaud commented 11 years ago

ok, sorry to be picky, but can you please try with an object whose depth the Kinect can actually measure (this one seems to have glass), like a cardboard orange juice box? I agree it is a bug, but I just want to narrow it down and make sure it is due to all the NaNs around your object (it also casts a strong shadow here).

beetleskin commented 11 years ago

ok, here you go (with yet another ugly template :) ). plane_cluster.py: [image: plane_cluster2]. plane_sample.py: [image: plane_sample4]

vrabaud commented 11 years ago

ok, I was finally able to reproduce that bug, and it was a C++ bug :) Can you please download the latest capture and try it out? Some flickering might still happen because of the NaNs, but I am on it.

beetleskin commented 11 years ago

Ok, the capturing with templates produces bag files now, thanks! However (oh no, issues incoming :) ) ...

  1. Some masks still seem to be empty
  2. Sometimes the clustering selects the wrong cluster (why don't you only consider points within a 3D bounding box above the template/dot pattern?)
  3. When running rosrun object_recognition_reconstruction mesh_object --all --visualize --commit, even the correct scans (where the mask fits) are not aligned. Some time ago I read that you only allow z-rotation around the center of the template/dot pattern (lazy Susan). Is this still the case? If yes, why? :) If no, then I guess the template matcher just produces this offset.

I uploaded the bag file for you here.

vrabaud commented 11 years ago

  1. Fixed, I think: if no pose was found, clustering was still happening (we were always careful and never faced that).
  2. Fixed too: yep, the distance to the plane was not absolute, which can be problematic if you have a plane under your main plane (I never had that config).
  3. Fixed too, thx to the above I think: a cylinder centered at the pose is now used (see the sketch below), and I updated the docs. The pose is determined by the pattern only; you can move the board/camera in any way you want.

Fixes are in capture and ecto_opencv.
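
For illustration, the cylinder crop described in 3. amounts to something like the following (a minimal NumPy sketch, not the actual C++ code; the parameter names merely mirror the --seg_z_min/--seg_radius_crop flags):

import numpy as np

def crop_to_cylinder(points, radius=0.5, z_min=0.0001, z_max=0.5):
    """Keep points inside a cylinder centered on the estimated pattern pose.

    points: (N, 3) array already transformed into the pattern frame,
    with z pointing up from the plane. NaN rows (no depth) are dropped.
    """
    valid = ~np.isnan(points).any(axis=1)
    pts = points[valid]
    in_radius = np.hypot(pts[:, 0], pts[:, 1]) <= radius  # distance from the pose axis
    above_plane = (pts[:, 2] >= z_min) & (pts[:, 2] <= z_max)
    return pts[in_radius & above_plane]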

Thx for your very detailed bug report; that helps make the code more robust. I hope it's all good for you now! Off to #7 now :)

beetleskin commented 11 years ago

Hey, thanks for the updates. I just updated the sources ... is it correct that the capture package now depends on household_objects_database? I'm stuck now at installing the dependencies: apparently there is nothing in the ROS repo except household_objects_database_msgs.

vrabaud commented 11 years ago

tabletop is the only one depending on it, and that is one of the pipelines, so you can remove it safely if you don't want to use that pipeline. The package is out but only in shadow-fixed (not ros); you can get it from here: https://github.com/ros-interactive-manipulation/household_objects_database (or wait for the package to make it into ROS). I'll update the docs for it, thx.
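
If you would rather have catkin skip the package than delete it, an empty CATKIN_IGNORE marker file should also work (the directory name here is an assumption):

touch src/tabletop/CATKIN_IGNORE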

beetleskin commented 11 years ago

ok thanks, I just removed tabletop. I'll try the fixes soon :)

beetleskin commented 11 years ago

Ok, it looks a lot better, but there are some scans which are not aligned. Is this due to an insufficient template? Can I improve the merging further? Have a look at the mesh after mesh_object - it's something, but not a bin ;)

beetleskin commented 11 years ago

Ok I used a better template and scanned very carefully. The model looks ok, so I guess you can close this issue. Capturing with templates works very well now.

Just one thing ... is there a textured version of the mesh somewhere? Do you produce a UV map or something? How do you store the color information of a model when uploading/meshing?

vrabaud commented 11 years ago

For the non-alignment: we've noticed the Kinect is much worse than the ASUS when hand-held (most likely because the Kinect is not synchronized). That's why we recommend a lazy Susan. The mesh has no color, as we don't need it for grasping or LINE-MOD, and we use MeshLab directly to produce the mesh. If you know of a library/program that can produce a mesh with texture from a colored point cloud, please let us know!

beetleskin commented 11 years ago

So you don't store any color information? What about TOD?

For the texturing, pcl_kinfu_largeScale_texture_output comes to mind, but I don't know if it is of any help. I didn't check what input they are using exactly.

vrabaud commented 11 years ago

Color is not stored right now, but it should obviously be stored. TOD does not use the mesh (but should, if meshes+texture were of great quality): it just uses the raw 2D input for descriptors and their 3D positions. Large-scale KinFu is unstable and requires a specific GPU (as it requires real time), and we'd like to keep ORK as generic as possible. There has to be a library around that does that, but the result would not be of any use anyway (except for color LINE-MOD, which is soon to come).

beetleskin commented 11 years ago

Oh, I don't mean KinFu in general. They have a script which produces a textured mesh output from colored point clouds: http://svn.pointclouds.org/pcl/trunk/gpu/kinfu_large_scale/tools/standalone_texture_mapping.cpp

vrabaud commented 11 years ago

Thx for the reference, I added an issue for it: #12

mshb88 commented 11 years ago

beetleskin, how did you solve the problem of the mask remaining empty?

beetleskin commented 11 years ago

Well, I posted the issue here and Vincent Rabaud solved it ;) Try building wg-perception and ecto from source.