Open WangYixuan12 opened 8 months ago
To better assist you, please provide the specific SDK version and firmware version; we will then arrange to replicate the issue.
I am using pyorbbecsdk at https://github.com/orbbec/pyorbbecsdk/tree/90cf69688e11f08ecfbbd564d10f7a75d9adc89b. I am not sure where I can check the firmware version, but it should be the latest version.
We have updated PyOrbbecSDK and suggest using the latest release version. For specifications on absolute accuracy, please refer to the product documentation: https://www.orbbec.com/products/tof-camera/femto-mega/. We suspect there may be an issue with the calibration of the multiple cameras.
If I understand #39 correctly, this could be due to the point clouds not being undistorted. If the point clouds do not take lens distortion into account, they will never line up perfectly.
Thank you, @tim-depthkit, for the suggestion! That could be a potential reason. I project the RGBD image to a point cloud using a simple pinhole camera model. Would you mind sharing how to take the distortion coefficients into account and undistort the point cloud?
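For context, the "simple pinhole camera" back-projection mentioned here (no distortion term) can be sketched as follows; `fx`, `fy`, `cx`, `cy` stand in for the intrinsics reported by the SDK, and the function name is illustrative:

```python
def pinhole_unproject(u, v, z, fx, fy, cx, cy):
    """Back-project depth pixel (u, v) with depth z (in meters) to a 3D
    point, ignoring lens distortion -- this is the model that leaves the
    residual misalignment discussed in this thread."""
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    return (x, y, z)
```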
Hi @xcy2011sky, I am using the latest PyOrbbecSDK. The extrinsic calibration should be correct; otherwise, the calibration board itself would be misaligned. Also, a similar extrinsic calibration pipeline works well on RealSense cameras. Do you mean that the Femto Bolt's absolute depth values are inaccurate under these circumstances? Specifically, I have four cameras looking at a table from about 0.5 m away, which is a typical setup in robotics. If the cameras' absolute depth values are not accurate in this setting, would you suggest using cameras from other brands, such as RealSense?
In my experience the depth is quite accurate on these sensors. You'll need to take distortion coefficients into account during un-projection of the depth map, as well as texture projection to color the points. You can see detailed information about how to use distortion coefficients here: https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#details
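To make this concrete, here is a hedged pure-Python sketch of distortion-aware back-projection. It assumes the OpenCV-style Brown-Conrady model from the link above (radial k1, k2, k3 and tangential p1, p2); the function names and the fixed-point iteration count are illustrative, not part of any Orbbec API:

```python
def undistort_normalized(xd, yd, k1, k2, p1, p2, k3, iters=20):
    """Invert the Brown-Conrady distortion model by fixed-point iteration:
    given distorted normalized coordinates (xd, yd), recover the
    undistorted normalized coordinates (x, y)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
        dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)  # tangential terms
        dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        x = (xd - dx) / radial
        y = (yd - dy) / radial
    return x, y

def depth_pixel_to_point(u, v, z, fx, fy, cx, cy, dist):
    """Back-project depth pixel (u, v) with depth z to a 3D point,
    undistorting the ray first so multi-camera clouds line up better."""
    k1, k2, p1, p2, k3 = dist
    xd = (u - cx) / fx  # distorted normalized coordinates
    yd = (v - cy) / fy
    x, y = undistort_normalized(xd, yd, k1, k2, p1, p2, k3)
    return (x * z, y * z, z)
```

Setting all five coefficients to zero reduces this to the plain pinhole model, which is why the misalignment shows up mostly away from the image center, where distortion is largest.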
Thank you for the suggestions! I will take a look at it
OrbbecSDK 1.9.3 (https://github.com/orbbec/OrbbecSDK/releases/tag/v1.9.3 ) added a general utility class (CoordinateTransformHelper), which supports point transformation between different coordinate systems, D2C conversion, and distortion correction for depth point clouds and RGBD point clouds.
Here is an example of how to invoke the functions of the CoordinateTransformHelper class: examples\cpp\Sample-Transformation. However, this interface has not been wrapped in Python yet.
@WangYixuan12 The sensor's accuracy is quite impressive; it surpasses RealSense in both precision and point-cloud quality. I'm not familiar with your specific reconstruction scenario, but it's worth noting that both the Bolt and Mega products have already been deployed successfully by a number of customers.
Regarding the issue of camera distortion, you can use the K4A wrapper's API, which makes the process much more convenient: https://github.com/orbbec/OrbbecSDK-K4A-Wrapper
Thank you for the information! I mainly develop in Python, but the K4A Wrapper seems to be mostly C++. Is there an easy way to use the K4A Wrapper from Python?
@WangYixuan12 You can visit https://github.com/orbbec/pyKinectAzure
OrbbecSDK 1.9.3 (https://github.com/orbbec/OrbbecSDK/releases/tag/v1.9.3 ) added a general utility class (CoordinateTransformHelper), which supports point transformation between different coordinate systems, D2C conversion, and distortion correction for depth point clouds and RGBD point clouds.
Hello @zhonghong322, does this helper method also work for raw depth frame data? If not, what is the best way to undistort the depth frame using the Orbbec camera parameters? Thank you
We currently do not support distortion correction directly on raw depth data.
Hi, we recently purchased 4 Femto Bolt cameras, calibrated their extrinsics, and merged the point clouds from each camera. The point clouds of the calibration board merge correctly; however, a cup on the table is noticeably misaligned across camera views. It seems that the absolute depth values are not accurate. Could you help with this problem?
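For reference, the merging step described above can be sketched in plain Python as applying each camera's calibrated extrinsic (rotation R, translation t) before concatenating; the function names are illustrative and not part of pyorbbecsdk:

```python
def transform_points(points, R, t):
    """Map 3D points from a camera frame into the shared world frame using
    a rigid transform: p_world = R @ p_cam + t (R as 3 rows of 3 floats)."""
    out = []
    for (x, y, z) in points:
        out.append(tuple(R[i][0] * x + R[i][1] * y + R[i][2] * z + t[i]
                         for i in range(3)))
    return out

def merge_clouds(clouds):
    """clouds: list of (points, R, t) tuples, one per camera.
    Returns a single merged cloud in the world frame."""
    merged = []
    for points, R, t in clouds:
        merged.extend(transform_points(points, R, t))
    return merged
```

Even with perfect extrinsics, points back-projected without undistortion will not coincide across cameras, consistent with the suggestion in this thread that lens distortion, not the extrinsic calibration, causes the residual offset.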