i3drobotics / i3dr_deimos-ros

ROS driver for the I3DR Deimos stereo camera
http://i3drobotics.com/deimos.html

No data on /stereo/points2 #3

Closed: void-robotics closed this issue 6 years ago

void-robotics commented 6 years ago

Thanks for creating this ROS driver; it's working fairly well.

I'm getting data from other topics (like /stereo/right/image_raw), but no data is coming from /stereo/points2.

How do I get data from this? Also, my goal is to poll any point in the image for its depth, so if there's an easier or different way to do that, I'm open to it.

Thanks, Nathan

jveitchmichaelis commented 6 years ago

Have you calibrated the camera? The Tara comes with an onboard calibration, but it's always worth calibrating yourself; the Deimos package expects that you've performed your own calibration rather than using the onboard one. There are some details in the readme, or you can look at http://wiki.ros.org/camera_calibration/Tutorials/StereoCalibration. If the camera is calibrated you should also see the rectified image topics (e.g. /stereo/left/image_rect).
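
As a rough sketch, something like this can confirm whether a calibration is actually being published (this isn't part of the driver; it just assumes the /stereo namespace used above):

```python
#!/usr/bin/env python
# Sketch: check whether the left camera is publishing a non-empty calibration.
import rospy
from sensor_msgs.msg import CameraInfo

rospy.init_node("check_calibration")
info = rospy.wait_for_message("/stereo/left/camera_info", CameraInfo, timeout=5.0)

# An uncalibrated camera typically publishes all-zero K and P matrices.
if not any(info.K) or not any(info.P):
    print("No calibration loaded; image_rect and points2 will stay empty")
else:
    print("Calibration present:")
    print("  K = %s" % (list(info.K),))
    print("  P = %s" % (list(info.P),))
```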

If you want to poll a specific location, you're better off processing the disparity map directly and using OpenCV to reproject those specific points to 3D. You'll also need the camera calibration for that.

There are other tools in ROS, e.g. stereo_image_proc (http://wiki.ros.org/stereo_image_proc), which can compute and display the disparity map so you can check that things look sensible.
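
If you go the disparity route, here is a minimal sketch of polling the depth at one pixel from the DisparityImage that stereo_image_proc publishes (topic name assumes the /stereo namespace; the pixel coordinates are just placeholders):

```python
#!/usr/bin/env python
# Sketch: read the depth at a single pixel from /stereo/disparity.
import rospy
from cv_bridge import CvBridge
from stereo_msgs.msg import DisparityImage

rospy.init_node("poll_disparity")
bridge = CvBridge()

msg = rospy.wait_for_message("/stereo/disparity", DisparityImage, timeout=5.0)
disp = bridge.imgmsg_to_cv2(msg.image)  # 32FC1 disparity image, in pixels

u, v = 320, 240  # hypothetical pixel to query (column, row)
d = float(disp[v, u])

if d > 0:
    # Depth along the optical axis: Z = f * B / d
    # msg.f is the focal length in pixels, msg.T the baseline in metres.
    z = msg.f * msg.T / d
    print("Depth at (%d, %d): %.3f m" % (u, v, z))
else:
    # Pixels with no valid match typically carry a non-positive disparity.
    print("No valid disparity at (%d, %d)" % (u, v))
```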

void-robotics commented 6 years ago

OK, so /stereo/points2 is empty by default.

If I want to poll a specific location, would the best approach be to use OpenCV's reprojectImageTo3D?

Why would I need to reproject if stereo_image_proc outputs points2, which I believe contains XYZ values for every point in the image?

jveitchmichaelis commented 6 years ago

If it's still empty, check that the calibration exists and is good; that's the most likely problem.

Looks like the points2 cloud is indeed organised, so it keeps the image structure. That often isn't the case with 3D data; formats like PLY usually just store a flat list of XYZ(RGB) points. Even so, you might find that performing 2D operations on a disparity map is easier than on a point cloud. And yes, reprojectImageTo3D should work.

However, on your last point, you almost certainly won't get a 3D location for every pixel in the image, due to occlusions and the limited overlap between the two cameras. Some parts of the image will have no calculated disparity; those points are not reprojected, since there would be no physical meaning in assigning them a 3D location. In ROS these points are set to NaN.
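
For reference, a sketch of the reprojectImageTo3D route with invalid pixels masked out (the file paths and pixel coordinates are placeholders; Q is the 4x4 disparity-to-depth matrix from your own calibration, e.g. the output of cv2.stereoRectify):

```python
# Sketch: reproject a whole disparity map to 3D and skip invalid pixels.
import numpy as np
import cv2

Q = np.load("Q.npy")                   # placeholder: saved during calibration
disparity = np.load("disparity.npy")   # placeholder: float32 disparity in pixels

points_3d = cv2.reprojectImageTo3D(disparity, Q)  # H x W x 3 array of XYZ

# Pixels with no computed disparity (<= 0 here) have no meaningful 3D location.
valid = disparity > 0
print("Valid 3D points: %d of %d" % (valid.sum(), disparity.size))

u, v = 320, 240  # hypothetical pixel to query
if valid[v, u]:
    print("XYZ at (%d, %d): %s" % (u, v, points_3d[v, u]))
else:
    print("No disparity at (%d, %d), so no 3D point" % (u, v))
```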

void-robotics commented 6 years ago

Somehow the points2 data just appeared, so that problem is solved.

Although getting a 3D location for every point is impossible, I'm still confused: why would reprojectImageTo3D be preferred over polling the point cloud?

jveitchmichaelis commented 6 years ago

It's not better in this case - use the point cloud.
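
Since the cloud is organised, you can index it by pixel. A minimal sketch (topic name and pixel coordinates assumed from this thread):

```python
#!/usr/bin/env python
# Sketch: look up the XYZ at one pixel in the organised /stereo/points2 cloud.
import math
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

rospy.init_node("poll_points2")
cloud = rospy.wait_for_message("/stereo/points2", PointCloud2, timeout=5.0)

u, v = 320, 240  # hypothetical pixel to query; valid because cloud.height > 1
x, y, z = next(point_cloud2.read_points(cloud,
                                        field_names=("x", "y", "z"),
                                        skip_nans=False,
                                        uvs=[(u, v)]))

if math.isnan(z):
    print("No depth at (%d, %d) (occluded or no disparity)" % (u, v))
else:
    print("XYZ at (%d, %d): (%.3f, %.3f, %.3f)" % (u, v, x, y, z))
```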