zivid / zivid-python

Official Python package for Zivid 3D cameras
BSD 3-Clause "New" or "Revised" License

Reconstruct Zivid frame from numpy ndarray #72

Open matthijsramlab opened 4 years ago

matthijsramlab commented 4 years ago

I need to convert a numpy.ndarray to a Zivid frame. The reason is that I am using ROS (the Zivid driver), which publishes point clouds in a non-Zivid format (the PointCloud2 message), and I want to perform Zivid eye-in-hand calibration, which requires a Zivid point cloud object.

I am able to convert a PointCloud2 message to a numpy.ndarray with the same result as running the Zivid PointCloud.to_array function. However, doing it the other way around does not seem possible at the moment, or at least I am not able to do it.
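(For anyone doing the same conversion, here is a minimal sketch of the ROS-to-NumPy direction; it assumes ROS 1 with the standard sensor_msgs.point_cloud2 helper and an organized cloud from the Zivid driver, and it only extracts x/y/z, not the color fields:)

```python
import numpy as np
from sensor_msgs import point_cloud2  # standard ROS 1 helper


def pointcloud2_to_xyz(msg):
    """Build an organized H x W x 3 float array (rough analogue of
    zivid PointCloud.to_array, without color) from a
    sensor_msgs/PointCloud2 published by the Zivid ROS driver."""
    points = point_cloud2.read_points(msg, field_names=("x", "y", "z"), skip_nans=False)
    xyz = np.array(list(points), dtype=np.float32)
    return xyz.reshape(msg.height, msg.width, 3)
```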

Is there a possibility to implement this? In the meantime, is there a workaround I can use?

eskaur commented 4 years ago

There are currently only two ways to create a Zivid::PointCloud:

To me this sounds like a deficiency in the ROS wrapper, i.e. it should be possible to do HandEye calibration in ROS with whatever data types ROS provides. @apartridge will likely know more.

matthijsramlab commented 4 years ago

it should be possible to do HandEye calibration in ROS with whatever data types ROS provides

Yes, I agree. Maybe a solution would be to let the detect_feature_points function accept a np.ndarray instead of a zivid.PointCloud?
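(Just to illustrate the suggestion: the overload below is hypothetical and does not exist in the current API; it assumes the zivid.hand_eye module from the 1.x Python wrapper and shows what accepting a NumPy array directly could look like:)

```python
import numpy as np
import zivid

# Hypothetical input: an organized H x W x 3 array built from the ROS point cloud.
xyz = np.load("cloud_from_ros.npy")

# Proposed overload (does not exist today): detect feature points from a NumPy array.
detection_result = zivid.hand_eye.detect_feature_points(xyz)
```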

apartridge commented 4 years ago

Unfortunately the Zivid ROS driver itself does not have hand-eye calibration. This would require some work and is not on the short-term roadmap.

In this case it sounds like @matthijsramlab is using both the ROS and Python API, and wants to do hand-eye calibration via the Python API, using data captured via ROS. That is not possible currently. To be able to do it, we would need to let the user make a PointCloud based on their own data. This is possible in C++ in API 1.x, but not in API 2.x. That feature is not available in the Python wrapper in API 1.x anyway. This is also not on our short-term roadmap as of now.

As a workaround, you could try to do hand-eye calibration using just Python (not involving the ROS driver at all), then disconnect from the camera in Python and re-connect to it in the ROS driver after the calibration is completed. It is a little messy, but probably doable.
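(A rough sketch of that workaround, assuming the calibration API of a recent zivid-python release (zivid.calibration; in 1.x the equivalent lives in zivid.hand_eye) and robot poses supplied as 4x4 NumPy arrays:)

```python
import zivid

app = zivid.Application()
camera = app.connect_camera()
settings = zivid.Settings(acquisitions=[zivid.Settings.Acquisition()])

# robot_poses: list of 4x4 NumPy arrays read from the robot controller (assumed available)
inputs = []
for pose_matrix in robot_poses:
    frame = camera.capture(settings)
    detection = zivid.calibration.detect_feature_points(frame.point_cloud())
    if detection.valid():
        inputs.append(
            zivid.calibration.HandEyeInput(zivid.calibration.Pose(pose_matrix), detection)
        )

result = zivid.calibration.calibrate_eye_in_hand(inputs)
print(result.transform())  # 4x4 hand-eye transform

camera.disconnect()  # release the camera so the ROS driver can connect to it afterwards
```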

A more hacky workaround is to make your own ZDF file with the point cloud data and load that from disk. Keep in mind that the ZDF format may change without notice in the future (and is changing some from API 1.x to 2.x), so it's not a good long-term solution.

eskaur commented 4 years ago

Note that calibrateEyeInHand() does not require a PointCloud as input, it merely requires DetectionResults and Poses. As long as the feature point detection (what normally happens in detectFeaturePoints()) can be done in Python with OpenCV, we merely need to add a way for the user to create their own DetectionResult based on a list of 3D points. This is a much lower hanging fruit in terms of SDK changes.
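(To make the OpenCV part concrete, here is a sketch of detecting checkerboard corners in the 2D image and reading the corresponding 3D points from the organized point cloud; the board dimensions and the nearest-pixel lookup are assumptions, and the missing piece remains an API to turn the resulting 3D points into a DetectionResult:)

```python
import cv2
import numpy as np

# Assumed inner-corner count of the checkerboard; must match the board actually used.
CHECKERBOARD = (6, 5)


def detect_feature_points_3d(rgb, xyz):
    """Detect checkerboard corners in the 2D image and look up the 3D point
    (from the organized H x W x 3 cloud) under each detected corner."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    found, corners = cv2.findChessboardCorners(gray, CHECKERBOARD)
    if not found:
        return None
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01),
    )
    points_3d = []
    for u, v in corners.reshape(-1, 2):
        # Nearest-pixel lookup; interpolation would be more accurate.
        points_3d.append(xyz[int(round(v)), int(round(u))])
    return np.array(points_3d)
```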

matthijsramlab commented 4 years ago

In this case it sounds like @matthijsramlab is using both the ROS and Python API, and wants to do hand-eye calibration via the Python API, using data captured via ROS.

This is indeed correct.

Note that calibrateEyeInHand() does not require a PointCloud as input, it merely requires DetectionResults and Poses. As long as the feature point detection (what normally happens in detectFeaturePoints()) can be done in Python with OpenCV, we merely need to add a way for the user to create their own DetectionResult based on a list of 3D points. This is a much lower hanging fruit in terms of SDK changes.

This would be a nice solution.

matthijsramlab commented 4 years ago

Any idea when this will be implemented?

eskaur commented 4 years ago

This is currently not on our shortlist and we are fully booked for a while, sorry. I can bring it up internally to see what we can do to change that.

matthijsramlab commented 4 years ago

Yes, that would be great. There must be more users who are using ROS to control the Zivid (and who need eye-in-hand calibration).