strands-project / scitos_robot

Everything related to the STRANDS robot hardware can go in here

OpenNi2 wrapper: change in image format? #39

Closed: cdondrup closed this issue 10 years ago

cdondrup commented 10 years ago

I am in the process of migrating the perception people stuff to use the new openni2 wrapper. I have run into a problem that I can't figure out, because I am not a vision person and not very familiar with OpenNI. I have changed all the relevant topics to listen to the right messages, but I don't get any detections. If I switch back to the old openni driver, it works. So my question is: what changed?

Before I used:

Now I use:

Does anyone have an idea why this might not work anymore? Has there been a change in the output format, calibration parameters, etc.?
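For what it's worth, a quick way to check what the new driver actually publishes is to print the encoding of the incoming messages. This is only a diagnostic sketch, not part of the tracking code; the topic name is a placeholder for whichever depth topic your node subscribes to:

```python
#!/usr/bin/env python
# Diagnostic sketch: print the encoding and size of incoming depth images.
# The topic name below is a placeholder -- substitute the depth topic your node uses.
import rospy
from sensor_msgs.msg import Image

def callback(msg):
    rospy.loginfo("encoding=%s width=%d height=%d" % (msg.encoding, msg.width, msg.height))

if __name__ == '__main__':
    rospy.init_node('depth_encoding_check')
    rospy.Subscriber('/camera/depth/image_raw', Image, callback)
    rospy.spin()
```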

RaresAmbrus commented 10 years ago

This is strange, as the code in the openni2 wrapper doesn't modify the images; it just passes them on to the ROS image processing pipeline, which is the same one used by the original openni driver from ROS. The calibration parameters are also the same as before.

I haven't used the raw images myself, but the point clouds obtained (both with and without RGB) are correct and I was using them to build the local maps.

I can't think of anything right now except the image_view error regarding the encoding that you posted earlier - I'll take a look at that tomorrow and see if it leads anywhere.

marc-hanheide commented 10 years ago

@RaresAmbrus, did you have a chance to look at this? We need to get the person detection up and running again.

RaresAmbrus commented 10 years ago

Sorry, got sidetracked. I'll take a look and get back to you today.

RaresAmbrus commented 10 years ago

After doing a bit of digging, here's what I found: openni publishes depth images as uint16, aka the 16UC1 type in ROS. These images can be converted to regular float values, aka the 32FC1 type in ROS. Here's an excerpt from the depth_image_proc ROS package (which we are using for further processing of the openni images):

All nodelets (besides convert_metric) in this package support both standard floating point depth images and OpenNI-specific uint16 depth images. Thus when working with OpenNI cameras (e.g. the Kinect), you can save a few CPU cycles by using the uint16 raw topics instead of the float topics. (http://wiki.ros.org/depth_image_proc#depth_image_proc.2BAC8-convert_metric)

As far as I can tell, image_view only supports the 32FC1 type, which is why some topics cannot be visualized. However, the /camera/depth/image_rect_meters topic (using the convert_metric nodelet from depth_image_proc) does the conversion to float and can be visualized by image_view.

The old openni driver was publishing a very large number of topics, and I only kept the ones that seemed relevant at the time. I apologize for the confusion. @cdondrup - regarding the people tracking, I think you should try the /camera/depth/image_rect_meters topic, as you may be expecting float data as input rather than uint16.
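For reference, here is a rough sketch (not taken from the actual tracker code) of what the two formats look like on the subscriber side, assuming standard rospy/cv_bridge/numpy tooling; the node name and the hand-off to the tracker are placeholders. A raw OpenNI depth image arrives as 16UC1 in millimeters, while /camera/depth/image_rect_meters arrives as 32FC1 in meters:

```python
#!/usr/bin/env python
# Sketch only: accept both OpenNI-style uint16 (16UC1, millimeters) and
# float (32FC1, meters) depth images in one callback.
import rospy
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def depth_callback(msg):
    if msg.encoding == '16UC1':
        # Raw OpenNI depth: uint16 millimeters -> float32 meters,
        # which is essentially what depth_image_proc/convert_metric provides.
        depth_mm = bridge.imgmsg_to_cv2(msg, desired_encoding='16UC1')
        depth_m = depth_mm.astype(np.float32) / 1000.0
    elif msg.encoding == '32FC1':
        # Already metric float depth (e.g. /camera/depth/image_rect_meters).
        depth_m = bridge.imgmsg_to_cv2(msg, desired_encoding='32FC1')
    else:
        rospy.logwarn("Unexpected depth encoding: %s" % msg.encoding)
        return
    # ... hand depth_m to the people tracker here (placeholder) ...

if __name__ == '__main__':
    rospy.init_node('depth_format_example')
    rospy.Subscriber('/camera/depth/image_rect_meters', Image, depth_callback)
    rospy.spin()
```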

cdondrup commented 10 years ago

Thank you very much for clarifying some of the confusion @RaresAmbrus. I will try that as soon as I have time.

cdondrup commented 10 years ago

@RaresAmbrus, that did the trick. Thank you once again for your help! Maybe that should go into the readme.

RaresAmbrus commented 10 years ago

Sounds good, I'll update the readme.

cdondrup commented 10 years ago

Just remembered: the camera_info for depth and RGB gives different values now, whereas it was the same for both in the old openni driver. Maybe you can include that in the readme as well.

RaresAmbrus commented 10 years ago

Alright, I'll add it.