Open matthijsramlab opened 4 years ago
There are currently only two ways to create a Zivid::PointCloud: capturing with a camera, or loading a ZDF file via Zivid::Frame. (A Zivid::Frame is merely a PointCloud plus some meta-data.) To me this sounds like a deficiency in the ROS wrapper, i.e. it should be possible to do HandEye calibration in ROS with whatever data types ROS provides. @apartridge will likely know more.
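For illustration, the two paths look roughly like this in zivid-python (a minimal sketch assuming API 2.x method names; in API 1.x the accessor is frame.get_point_cloud() instead of frame.point_cloud()):

```python
# Sketch of the only two ways to obtain a PointCloud today:
# capture from a camera, or load a saved frame (ZDF file) from disk.
import zivid

app = zivid.Application()

# 1) Capture from a connected camera
camera = app.connect_camera()
settings = zivid.Settings(acquisitions=[zivid.Settings.Acquisition()])
frame = camera.capture(settings)
point_cloud = frame.point_cloud()

# 2) Load a previously saved frame from disk
frame_from_file = zivid.Frame("capture.zdf")
point_cloud_from_file = frame_from_file.point_cloud()
```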
it should be possible to do HandEye calibration in ROS with whatever data types ROS provides
Yes, I agree. Maybe a solution would be to let the detect_feature_points function accept a np.ndarray instead of the zivid.PointCloud?
Unfortunately the Zivid ROS driver itself does not have hand-eye calibration. This would be some work to do and is not on the short-term roadmap.
In this case it sounds like @matthijsramlab is using both the ROS and Python API, and wants to do hand-eye calibration via the Python API, using data captured via ROS. That is not possible currently. To be able to do it, we would need to let the user make a PointCloud based on their own data. This is possible in C++ in API 1.x, but not in API 2.x. That feature is not available in the Python wrapper in API 1.x anyway. This is also not on our short-term roadmap as of now.
As a workaround, you could try to do hand-eye calibration using just Python (not involving the ROS driver at all), then disconnect from the camera in Python and re-connect to it in the ROS driver after the calibration is completed. It is a little messy, but probably doable.
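A rough sketch of that workaround (hedged: the names follow the zivid-python 2.x calibration module, and the robot poses are placeholders you would read from your own robot controller):

```python
# Pure-Python hand-eye calibration, then release the camera so the
# ROS driver can reconnect. API names assume zivid-python 2.x.
import zivid

robot_poses = []  # fill with 4x4 base-to-end-effector matrices (numpy arrays)

app = zivid.Application()
camera = app.connect_camera()
settings = zivid.Settings(acquisitions=[zivid.Settings.Acquisition()])

hand_eye_inputs = []
for robot_pose in robot_poses:
    frame = camera.capture(settings)  # calibration board must be visible
    detection = zivid.calibration.detect_feature_points(frame.point_cloud())
    if detection.valid():
        hand_eye_inputs.append(
            zivid.calibration.HandEyeInput(
                zivid.calibration.Pose(robot_pose), detection
            )
        )

result = zivid.calibration.calibrate_eye_in_hand(hand_eye_inputs)
if result.valid():
    print("Hand-eye transform:\n", result.transform())

camera.disconnect()  # frees the camera for the ROS driver
```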
A more hacky workaround is to make your own ZDF file with the point cloud data and load that from disk. Keep in mind that the ZDF format may change without notice in the future (and is changing somewhat from API 1.x to 2.x), so it's not a good long-term solution.
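If you try that, note that ZDF files in API 1.x are netCDF containers. A purely illustrative sketch (the internal layout is undocumented; the data/pointcloud variable name matches what older samples read, but inspect your own file with ncdump -h before relying on it):

```python
# Hacky ZDF rewrite: clone an existing capture and overwrite its XYZ data.
# The group/variable names are assumptions about the undocumented 1.x layout.
import shutil
import netCDF4
import numpy as np

def write_xyz_to_zdf(template_zdf, output_zdf, xyz):
    """Copy template_zdf to output_zdf and replace its point data.

    xyz must have the same (height, width, 3) shape as the template.
    """
    shutil.copyfile(template_zdf, output_zdf)
    with netCDF4.Dataset(output_zdf, "a") as zdf:
        zdf["data"]["pointcloud"][:, :, :] = xyz.astype(np.float32)
```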
Note that calibrateEyeInHand() does not require a PointCloud as input; it merely requires DetectionResults and Poses. As long as the feature point detection (what normally happens in detectFeaturePoints()) can be done in Python with OpenCV, we merely need to add a way for the user to create their own DetectionResult based on a list of 3D points. This is a much lower-hanging fruit in terms of SDK changes.
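To make that concrete, a rough stand-in for the detection step could look like this (plain OpenCV checkerboard detection; the actual logic inside detectFeaturePoints() is not public, and the board size is a placeholder):

```python
# Hypothetical replacement for detectFeaturePoints(): find checkerboard
# corners in the 2D image, then read their 3D coordinates from the
# organized point cloud (same height x width grid).
import cv2
import numpy as np

def detect_feature_points_3d(rgb, xyz, board_size=(7, 6)):
    """rgb: HxWx3 uint8 image; xyz: HxWx3 float32 points (NaN = missing).

    Returns an Nx3 array of 3D feature points, or None if detection fails.
    """
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if not found:
        return None
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01),
    )
    # Look up the 3D point at each (rounded) corner pixel.
    points = [xyz[int(round(v)), int(round(u))] for u, v in corners.reshape(-1, 2)]
    return np.asarray(points)
```

A DetectionResult constructed from such a list of 3D points is exactly the missing piece described above.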
In this case it sounds like @matthijsramlab is using both the ROS and Python API, and wants to do hand-eye calibration via the Python API, using data captured via ROS.
This is indeed correct.
Note that calibrateEyeInHand() does not require a PointCloud as input; it merely requires DetectionResults and Poses. As long as the feature point detection (what normally happens in detectFeaturePoints()) can be done in Python with OpenCV, we merely need to add a way for the user to create their own DetectionResult based on a list of 3D points. This is a much lower-hanging fruit in terms of SDK changes.
This would be a nice solution. Any idea when this will be implemented?
This is currently not on our shortlist and we are fully booked for a while, sorry. I can bring it up internally to see what we can do to change that.
Yes, that would be great. There must be more users that are using ROS to control the Zivid (and need hand-eye calibration).
I need to convert a numpy.ndarray to a Zivid frame. The reason for this is that I am using ROS (Zivid driver), which publishes point clouds in a non-Zivid format (PointCloud2 messages), and I want to perform Zivid eye-in-hand calibration, which requires a zivid.PointCloud object.
I am able to convert a PointCloud2 message to a numpy.ndarray, which gives the same result as running the Zivid PointCloud.to_array function. However, doing it the other way around is not possible at the moment. Or at least I am not able to do it...
Is there a possibility to implement this? In the meantime, is there a workaround I can use?
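For reference, the PointCloud2-to-ndarray direction mentioned above can be done along these lines (ROS1 sensor_msgs utilities; the reshape assumes an organized cloud as published by the Zivid driver):

```python
# PointCloud2 -> numpy, the direction that already works.
import numpy as np
import sensor_msgs.point_cloud2 as pc2

def pointcloud2_to_xyz(msg):
    """Return a (height, width, 3) float32 array of XYZ coordinates."""
    points = pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=False)
    xyz = np.array(list(points), dtype=np.float32)
    return xyz.reshape(msg.height, msg.width, 3)
```

The reverse direction (an ndarray back into a zivid.PointCloud or zivid.Frame) is what this issue asks for.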