bumbastic opened 4 years ago
There is no direct feature for this in Open3D, but if you already have a plane you could use simple NumPy operations to filter out the points. Something along the lines of:

```python
import numpy as np
import open3d as o3d

points = np.asarray(pcd.points)
mask = your_plane_test(points)  # this function could compute the distance to your plane and threshold it
pcd.points = o3d.utility.Vector3dVector(points[mask])
```
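A minimal sketch of what such a `your_plane_test` could look like, assuming the plane is given in Hessian form `(a, b, c, d)` for `a*x + b*y + c*z + d = 0` (the function name comes from the snippet above; the plane values and threshold are illustrative placeholders, not Open3D API):

```python
import numpy as np

def your_plane_test(points, plane=(0.0, 0.0, 1.0, 0.0), threshold=0.02):
    """Return a boolean mask: True for points within `threshold` of the plane.

    `plane` is (a, b, c, d) for a*x + b*y + c*z + d = 0.
    """
    a, b, c, d = plane
    normal = np.array([a, b, c])
    # Point-to-plane distance, normalized in case (a, b, c) is not unit length
    dist = np.abs(points @ normal + d) / np.linalg.norm(normal)
    return dist <= threshold

# Example: keep only points near the z = 0 plane
points = np.array([[0.0, 0.0, 0.01], [0.0, 0.0, 0.5], [1.0, 2.0, -0.015]])
mask = your_plane_test(points)
```

Inverting the mask (`points[~mask]`) would instead drop the points on the plane.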
You could also use `segment_plane` to fit a plane to the point cloud: http://www.open3d.org/docs/release/python_api/open3d.geometry.PointCloud.html?highlight=segment#open3d.geometry.PointCloud.segment_plane
Ok, what I want to do is crop the RGB-D image, to avoid filling the TSDF with a huge amount of unwanted depth data.
But you can then project the 3D points (with mask values) back to the image and use that mask on your RGB-D image?
That sounds possible. But I don't want to do plane segmentation. I want to filter an undistorted RGB-D image so that I get an RGB-D image with valid depth only in a specified area of interest.
@griegler Hi, can you give some examples of how to project 3D points back to the image?
Hi, I get the images from a Kinect for Azure device. I have ground-truth poses for each image. The object I want to scan is in the foreground. The problem is that the object is closer to the device at min. Y positions and further away at max. Y positions; the same goes for the background that I want to filter away. Simply truncating at a general max. depth does not work well in my case: it leaves a lot of unwanted background, filling up my TSDF.
@whyygug You need your camera parameters. If we assume focal length `f` and principal point `[px, py]` as your params, then you can create the intrinsic matrix `K = np.array([(f, 0, px), (0, f, py), (0, 0, 1)])`. Given your 3D points `xyz = np.asarray(pcd.points)`, you can project them by `uvd = xyz @ K.T`, and the image positions are given by `uvd[:, 0] / uvd[:, 2]` and `uvd[:, 1] / uvd[:, 2]`.
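Put together, the projection described above could look like this (a sketch; the intrinsics `fx`, `fy`, `px`, `py`, the image size, and the example points are placeholders, not your actual Kinect calibration):

```python
import numpy as np

# Hypothetical intrinsics and image size -- replace with your calibration values
fx, fy, px, py = 600.0, 600.0, 320.0, 240.0
width, height = 640, 480
K = np.array([(fx, 0.0, px),
              (0.0, fy, py),
              (0.0, 0.0, 1.0)])

# 3D points in the camera frame (a small made-up set)
xyz = np.array([[0.0, 0.0, 1.0],     # straight ahead, 1 m away
                [0.1, -0.05, 2.0]])

# Project into the image plane
uvd = xyz @ K.T
u = uvd[:, 0] / uvd[:, 2]
v = uvd[:, 1] / uvd[:, 2]

# Build a mask over the depth image from the projected pixel positions
mask = np.zeros((height, width), dtype=bool)
ui = np.round(u).astype(int)
vi = np.round(v).astype(int)
valid = (ui >= 0) & (ui < width) & (vi >= 0) & (vi < height)
mask[vi[valid], ui[valid]] = True
```

The resulting `mask` can then be used to zero out depth pixels outside your region of interest before TSDF integration.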
@theNded do you have an idea that could help @bumbastic ?
Hi, thanks. I think I know what you are saying: filter in the undistortion loop, through all pixels. I was hoping for something like `depth_image.magic.SetXYZBounds(..., ..., ..., ..., ...)`: a method to crop the RGB-D image to a region of interest, not just min/max depth. It could be something like a 2D pixel polygon or rectangle, plus a maximum-depth plane that can be rotated around the x and y axes.
I need it to filter away background whose depth is not perpendicular to the sensor viewing direction. This background 'noise' can of course be filtered out of the final model, but it makes up about 90% of the triangles, significantly slows down the computation, and requires a lot of hardware power. The background is also moving, which adds noise to the registration process. Any suggestions?
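One way to approximate the "rectangle plus tilted max-depth plane" idea with plain NumPy, without any Open3D support: make the depth limit a linear function of the pixel coordinates, which is a max-depth plane tilted about the x and y axes. A sketch (the function name, parameters, and example values are all illustrative):

```python
import numpy as np

def crop_depth(depth, roi, depth_plane):
    """Zero out depth outside a rectangular ROI and beyond a tilted max-depth plane.

    depth:       (H, W) depth image; 0 means invalid.
    roi:         (u0, v0, u1, v1) pixel rectangle to keep.
    depth_plane: (a, b, c) so the per-pixel depth limit is a*u + b*v + c,
                 i.e. a max-depth plane tilted about the x and y axes.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    a, b, c = depth_plane
    max_depth = a * u + b * v + c                                  # per-pixel limit
    keep = (u >= roi[0]) & (u < roi[2]) & (v >= roi[1]) & (v < roi[3])
    keep &= depth < max_depth
    return np.where(keep, depth, 0.0)                              # invalidate the rest

# Example: keep a 2x2 pixel ROI with a flat 2 m limit
depth = np.full((4, 4), 1.0)
cropped = crop_depth(depth, (1, 1, 3, 3), (0.0, 0.0, 2.0))
```

Applying this to the depth image before TSDF integration would keep the unwanted background out of the volume entirely, rather than filtering the final mesh.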