Open LemonPi opened 2 years ago
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.
@LemonPi This is an interesting question. Is there a known or obvious way to incorporate the normals in the algorithm?
@bottler Yes, there are a couple, including NICP (implementation: https://laempy.github.io/pyoints/tutorials/icp.html#NICP-algorithm; J. Serafin and G. Grisetti (2014): ‘NICP: Dense normal based pointcloud registration’, International Conference on Intelligent Robots and Systems (IROS): 742-749).
There are also more traditional methods from the Point Cloud Library (PCL) that do registration with point-wise features that can include surface normals: https://pcl.readthedocs.io/projects/tutorials/en/master/how_features_work.html#how-3d-features-work
In my work, I need to use ICP quite extensively, together with known freespace. Even though ICP is about point cloud alignment, I could probably introduce a cost function that evaluates how much a transform violates the known-freespace constraint. I haven't dug too deep into pytorch3d's ICP code, but if it solves the alignment problem in closed form, such as with Procrustes, it would probably have to change to an optimization approach to take the external cost function into account. One possibility is gradient-descent ICP (https://arxiv.org/pdf/1907.09133.pdf): instead of solving the Procrustes problem in closed form, treat alignment as a cost function and run gradient descent on it using batches of correspondences.
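To make the idea concrete, here is a toy 2D sketch (hypothetical, not pytorch3d code) of one gradient-descent ICP inner step with an added freespace penalty. All names are mine; freespace is approximated as a set of known-empty balls, and I use finite-difference gradients just to keep the sketch dependency-free (autograd would be used in practice):

```python
import numpy as np

def transform(theta, t, pts):
    """Apply a 2D rigid transform (rotation theta, translation t) to (N, 2) points."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return pts @ R.T + t

def cost(params, src, tgt, free_centers, free_radius, lam=1.0):
    """Alignment residual over fixed correspondences, plus a penalty for
    transformed points that penetrate known-empty balls."""
    theta, t = params[0], params[1:]
    moved = transform(theta, t, src)
    align = np.sum((moved - tgt) ** 2)
    d = np.linalg.norm(moved[:, None, :] - free_centers[None], axis=-1)
    violation = np.sum(np.clip(free_radius - d, 0.0, None) ** 2)
    return align + lam * violation

def gd_icp_step(src, tgt, free_centers, free_radius, lr=0.05, iters=1000):
    """Minimize the combined cost by plain gradient descent instead of a
    closed-form Procrustes solve."""
    params = np.zeros(3)  # [theta, tx, ty]
    eps = 1e-5
    for _ in range(iters):
        g = np.zeros(3)
        for i in range(3):  # finite-difference gradient of the combined cost
            dp = np.zeros(3)
            dp[i] = eps
            g[i] = (cost(params + dp, src, tgt, free_centers, free_radius)
                    - cost(params - dp, src, tgt, free_centers, free_radius)) / (2 * eps)
        params = params - lr * g
    return params
```

The point is only that the external freespace term slots into the same scalar cost that the alignment term does, so any gradient-based optimizer can trade the two off.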
I am currently investigating this problem and seeing how well existing solutions handle some related conditions (3D NDT and SDF to SDF registration methods), as well as developing my own. Could be a while before I get good results though.
Preliminary results on YCB mustard bottle registration with ICP using 5 known surface points (shown as black crosses, with blue lines indicating their normals): https://youtu.be/h3zAA3p2A-I. The experiment produces a batch of 30 pose estimates (initialized from random poses), gradually increasing the number of known free points (magenta). The method refines the pose estimates by penalizing those that are inconsistent with the known freespace.
I would be strongly interested in your results.
As far as I understand it, we would need a function similar to `corresponding_points_alignment`, but more like a `corresponding_points_normals_alignment`. That way we could compute the well-researched point-to-normal ICP estimate. Maybe something similar to the Open3D implementation, but with the possibility to process batched data.
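For reference, here is a hypothetical sketch of what such a `corresponding_points_normals_alignment` could compute: one linearized, batched point-to-plane step. Given correspondences (p_i, q_i) with target normals n_i, it minimizes sum_i (n_i · (R p_i + t − q_i))² under the small-angle approximation R ≈ I + [ω]×, which reduces to a 6×6 linear solve per batch element (NumPy here for self-containment; a real version would use torch tensors):

```python
import numpy as np

def point_to_plane_step(src, tgt, tgt_normals):
    """src, tgt, tgt_normals: (B, N, 3). Returns (omega, t), each (B, 3),
    the small rotation vector and translation of one linearized step."""
    c = np.cross(src, tgt_normals)                         # p_i x n_i, (B, N, 3)
    J = np.concatenate([c, tgt_normals], axis=-1)          # (B, N, 6) Jacobian rows
    r = np.einsum('bnd,bnd->bn', tgt_normals, tgt - src)   # n_i . (q_i - p_i), (B, N)
    A = np.einsum('bni,bnj->bij', J, J)                    # (B, 6, 6) normal equations
    b = np.einsum('bni,bn->bi', J, r)                      # (B, 6)
    x = np.linalg.solve(A, b)                              # batched 6x6 solve
    return x[:, :3], x[:, 3:]                              # omega, t
```

Iterating this step (re-estimating correspondences each round and composing the increments) gives the usual point-to-plane ICP loop.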
My current primary goal is to incorporate freespace information more so than surface normal information. Results so far are quite good; I'm comparing against more freespace-aware baselines and also trying to make the optimization method more sophisticated.
Hey guys, I just published an RSS paper on this problem: https://www.roboticsproceedings.org/rss19/p077.pdf, along with a standalone library (experiments are in a separate package) published to PyPI and pip-installable:
pip install chsel
https://github.com/UM-ARM-Lab/chsel (pronounced chisel)
In comparison to the ICP implementation, this method:
I'd be happy to look into/help incorporate this as an option in pytorch3d
@GregorKobsik @bottler pinging you guys since you expressed interest in this problem
❓ Questions on how to use PyTorch3D
Does PyTorch3D support ICP variants that use oriented points (points with normals associated with each one)? In `pytorch3d.ops` there are `estimate_pointcloud_normals`, `estimate_pointcloud_local_coord_frames`, and `add_points_features_to_volume_densities_features`, but there doesn't seem to be any algorithm in `pytorch3d.ops` that uses these features. My use case is that I have an oriented point cloud sampled from a mesh (so it can have any density) to which I'd like to register another, very sparse oriented point cloud. What is the best way to do this in PyTorch3D?

I am currently just using `iterative_closest_point`, which takes in only point information; it returns a lot of transforms that intersect known freespace, and it does not use the surface normal information. Post-processing each transform's plausibility using intersection with freespace and surface-normal alignment is hacky and leads to inefficient sampling.
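For concreteness, the hacky post-processing I described looks roughly like this sketch (all names hypothetical; freespace is approximated as a set of known-empty balls, and a surface-normal alignment term would be added analogously):

```python
import numpy as np

def freespace_violation(Rs, ts, src, free_centers, free_radius):
    """Score a batch of candidate rigid transforms (Rs: (B, 3, 3), ts: (B, 3))
    by how deeply the transformed source points (src: (N, 3)) penetrate
    known-empty balls (free_centers: (M, 3)). Lower score = more plausible."""
    moved = np.einsum('bij,nj->bni', Rs, src) + ts[:, None, :]        # (B, N, 3)
    d = np.linalg.norm(moved[:, :, None, :] - free_centers[None, None], axis=-1)
    return np.sum(np.clip(free_radius - d, 0.0, None), axis=(1, 2))   # (B,)
```

The candidates that ICP returns get scored this way and the worst ones are discarded, which is exactly the inefficiency: most of the sampled poses are rejected after the fact instead of the constraint steering the optimization.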