Closed: nonlinear1 closed this issue 4 years ago
In your `remap` you are using `INTER_LINEAR`, which will cause weird artifacts at depth discontinuities. You have to use `INTER_NEAREST` when you warp or interpolate depth. If you care about depth quality, you can do some sort of depth-discontinuity detection: use `INTER_LINEAR` away from discontinuities and `INTER_NEAREST` at them.
I will try thank you very much
@jasjuang Thank you for your help, you are right!
@nonlinear1 @jasjuang Commenting here because my issue seems to be identical (please let me know if this is bad form and I should start a fresh issue instead). I'm post-processing k4a images in Python and running into the same error. I'm reading into Python:

- a 24-bit, 3-channel color image (uint8 dtype)
- a 16-bit, 1-channel depth image (uint16 dtype), which is already transformed into the color reference
- the Kinect camera's intrinsic and distortion data

The images are reading in correctly, and generating a point cloud from the raw images works perfectly fine. Then I apply the following code:
```python
import cv2
import open3d as o3d

h, w = color_im.shape[:2]
# Compute an optimal new camera matrix and the valid-pixel ROI.
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(matrix_in, dist_vector, (w, h), 1, (w, h))
x, y, w, h = roi
mapx, mapy = cv2.initUndistortRectifyMap(matrix_in, dist_vector, None, newcameramtx, (w, h), 5)
# NEAREST for depth (no blending across discontinuities), LINEAR for color.
depth_undistorted = cv2.remap(depth_im, mapx, mapy, interpolation=cv2.INTER_NEAREST)
color_undistorted = cv2.remap(color_im, mapx, mapy, interpolation=cv2.INTER_LINEAR)
# Crop both images to the valid ROI.
color_cropped = color_undistorted[y:y+h, x:x+w]
depth_cropped = depth_undistorted[y:y+h, x:x+w]
color_converted = o3d.geometry.Image(color_cropped)
depth_converted = o3d.geometry.Image(depth_cropped)
combined_im = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color_converted, depth_converted, depth_scale=1000.0, depth_trunc=1.0,
    convert_rgb_to_intensity=False)
combined_pcd = o3d.geometry.PointCloud.create_from_rgbd_image(combined_im, matrix_in)
```
The color image comes out fine, and the depth image is hard to check since it's 16-bit. Here is the raw color image (left) and the undistorted color image (right):
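(Aside: one way to eyeball a 16-bit depth map is to rescale its valid range to 8 bits first. A generic sketch, not from the original post; `depth_preview` is a made-up helper name:)

```python
import numpy as np

def depth_preview(depth_u16):
    """Scale a uint16 depth map to uint8 for visual inspection."""
    d = depth_u16.astype(np.float32)
    valid = d > 0  # zeros usually mean "no data" on the Kinect
    if valid.any():
        lo, hi = d[valid].min(), d[valid].max()
        # Stretch [lo, hi] onto [0, 255]; keep invalid pixels at 0.
        d = np.where(valid, (d - lo) / max(hi - lo, 1.0) * 255.0, 0.0)
    return d.astype(np.uint8)

# e.g. cv2.imshow("depth", depth_preview(depth_undistorted))
```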
Now, here's a comparison of the "raw" point cloud vs. the undistorted point cloud. Obviously something has gone wrong, and I get the same "pyramidal" shape in the point cloud that @nonlinear1 seemed to encounter:

As you can see, I'm already specifying NEAREST interpolation for the depth undistortion and still running into the problem. Any other ideas?
Edit: Solved. My process was fine, but apparently:
"Open3d has a bug where it does not always handle memory correctly when it is passed a "view" into a numpy array (created by the index slicing) instead of a real array of contiguous memory. It can cause memory to get shuffled around which could explain the wild values"
Fixed with:

```python
import copy

# ...
color_copy = copy.deepcopy(color_cropped)
depth_copy = copy.deepcopy(depth_cropped)
color_converted = o3d.geometry.Image(color_copy)
depth_converted = o3d.geometry.Image(depth_copy)
# ...
```
Hopefully anyone else using open3d for PCD generation can find this useful.
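For what it's worth, `numpy.ascontiguousarray` (or `.copy()`) should achieve the same thing as `copy.deepcopy` here, since the point is just to hand Open3D a contiguous buffer instead of a sliced view. A quick illustration of the view/contiguity issue:

```python
import numpy as np

img = np.zeros((480, 640, 3), np.uint8)
cropped = img[10:100, 20:200]              # a view, not a copy
assert not cropped.flags["C_CONTIGUOUS"]   # the kind of array Open3D may mishandle

safe = np.ascontiguousarray(cropped)       # contiguous copy, safe to pass on
assert safe.flags["C_CONTIGUOUS"]
# color_converted = o3d.geometry.Image(safe)
```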
I added an undistortion step for the color image and depth image to get an undistorted color image and depth image. Then I use the undistorted color image and depth image to make a point cloud file in PCD format. But the point cloud file has many scattered points along the rays emitted by the depth camera. I am confused by this phenomenon. The point cloud screenshot is as follows, and here is my code:
```cpp
#include <...>  // header name lost in the original post
#include <k4a/k4a.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <...>  // header name lost in the original post
#include <opencv2/calib3d/calib3d.hpp>

using namespace std;
using namespace cv;

// Copy a raw buffer into a cv::Mat of the matching element type.
template <typename T>
Mat create_mat_from_buffer(T *data, int width, int height, int channels = 1)
{
    Mat mat(height, width, CV_MAKETYPE(DataType<T>::type, channels));
    memcpy(mat.data, data, width * height * channels * sizeof(T));
    return mat;
}

static string get_serial(k4a_device_t device)
{
    size_t serial_number_length = 0;
    // ... (rest of the body missing from the original post)
}

int main(int argc, char **argv)
{
    // ... (body missing from the original post)
}
```
Does anyone know why? Thanks a lot.