lijx10 / USIP

USIP: Unsupervised Stable Interest Point Detection from 3D Point Clouds. ICCV 2019
GNU General Public License v3.0

Code Release. #1

Open XuyangBai opened 5 years ago

XuyangBai commented 5 years ago

Hi, I really appreciate your work on point cloud keypoint detection. Is there any estimated date for the release? Thank you very much.

rAm1n commented 5 years ago

@lijx10 Hi there. Are you still planning to release the code?

lijx10 commented 5 years ago

Yes, the code will definitely be released in the near future. We are still waiting for our paper's acceptance, and we wish to release the code as soon as possible too.

tchaton commented 5 years ago

Do you have an estimated date for the release? And do you plan to work on a version 2?

tchaton commented 5 years ago

And also, is your code implemented in PyTorch?

lijx10 commented 5 years ago

The code will be released tomorrow. Thanks for your patience.

@tchaton Yes, it is in PyTorch. I don't have ideas for a version 2 so far.

tchaton commented 5 years ago

@lijx10, Awesome :) I will give it a try then. An idea for a version 2 (not worth a paper) would be to replace your model's backbone (PointNet) with something state-of-the-art (like ShellNet or DGCNN) to outperform previous methods while being even faster.

tchaton commented 5 years ago

@lijx10, and it would be great to know whether this model can perform good registration on indoor point clouds like S3DIS or ScanNet.

tchaton commented 5 years ago

And how precise is it, in terms of meters, centimeters, or millimeters, for real-world applications?

lijx10 commented 5 years ago

@tchaton The code is online now. Yes, replacing the backbone is an option, but we are not prioritizing it, since our major contribution is the idea of unsupervised keypoint detection. Please feel free to give it a try and share the results with us. Thanks!

Our method works for general point clouds, including indoor scans. However, the results for single-frame RGBD (SceneNN, Redwood, 3DMatch) are less appealing (though we still outperform the state of the art), compared to the superior results for outdoor scenes / object models. But we haven't tried large-scale indoor scenes yet.

tchaton commented 5 years ago

@lijx10 Thank you very much for releasing the code and for your answer. I think this could be interesting for me, but from your paper, it seems that the point clouds are overlapping.

In our case, we are building indoor scans and we try to do registration using AI at scale, potentially with hundreds of sub-point-clouds where sometimes there is almost no overlap.

Here is an idea for a version 2: if possible, your model could have an attention mechanism in 3D space, allowing it to look for keypoint pairs in sub-parts of the space. That would allow the model to perform keypoint matching even when the overlap between point clouds is small.

It is highly recommended to have at least 60% overlap, but in our case (after asking the scanning teams), it is closer to 30% on average.

lijx10 commented 5 years ago

Sorry but I am confused, how can we perform registration if there is no overlap?

tchaton commented 5 years ago

@lijx10 In real cases, the overlap is usually between 30% and 60%. The open-source datasets are usually extremely far from real-world data, and these are no exceptions. Therefore, the real question is how we can perform accurate registration with such small coverage. Currently, people provide at least a minimum of 3 keypoint pairs, which may help define the search zone used for registration. They give an estimated transformation matrix, and the idea is to refine it using the overlap description. Ideally, it would be great to have a model able to find this potential zone of overlap (assuming at least 30% overlap) and perform registration by giving keypoint pairs. Your research would become extremely valuable for the entire community if it provided a study of how it performs with point clouds that have small overlap (plus potential improvements for this real-world use case).

Best, Thomas Chaton
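For reference, the minimal 3-keypoint-pair estimate described above can be computed in closed form with the Kabsch / Procrustes algorithm. This is a generic NumPy sketch (the function name and array shapes are my own, not part of USIP):

```python
import numpy as np

def rigid_transform_from_pairs(src, dst):
    """Estimate R, t such that dst_i ≈ R @ src_i + t (Kabsch algorithm).

    src, dst: (N, 3) arrays of matched keypoints, N >= 3.
    """
    src_c = src - src.mean(axis=0)           # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

With 3 exact correspondences this recovers the transform directly; with more (noisy) pairs it gives the least-squares fit, which is also what a RANSAC refinement step would use on its inlier set.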

XuyangBai commented 5 years ago

Hi @lijx10 Thanks for sharing. Are you also going to share the evaluation code?

lijx10 commented 5 years ago

@XuyangBai Yes, the evaluation code will be released in a few days.

lijx10 commented 5 years ago

@tchaton Currently the USIP paper is only about feature / keypoint detection and description. Point cloud registration is performed with standard RANSAC pipeline. In practice, the RANSAC pipeline is able to handle small overlaps, as long as the feature descriptor is good enough.

We have experimented with Redwood dataset registration, where the overlap is as low as 30%. Our registration recall is 76.4%, which outperforms the state of the art as well. However, it is not a huge improvement. As shown in the USIP paper, we didn't achieve very good results on RGBD scans. In my experiments, RGBD scans are small-scale and contain lots of flat surfaces like walls and floors; therefore, the feature descriptor matters more than the feature detector.
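For readers unfamiliar with the standard RANSAC pipeline mentioned above, here is a minimal generic sketch in plain NumPy (not the code from this repository, and all names are my own). It assumes descriptor matching has already produced candidate keypoint correspondences, repeatedly samples minimal 3-point sets, and keeps the rigid transform with the most inliers:

```python
import numpy as np

def estimate_rigid(src, dst):
    """Least-squares rigid transform (Kabsch) from matched 3D points."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

def ransac_registration(src_kp, dst_kp, n_iters=1000,
                        inlier_thresh=0.05, seed=0):
    """RANSAC over candidate correspondences (src_kp[i] <-> dst_kp[i]).

    Samples minimal 3-point sets, scores each hypothesis by its inlier
    count, then refits on the best inlier set.
    """
    rng = np.random.RandomState(seed)
    best_inliers = np.zeros(len(src_kp), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src_kp), 3, replace=False)
        R, t = estimate_rigid(src_kp[idx], dst_kp[idx])
        resid = np.linalg.norm(src_kp @ R.T + t - dst_kp, axis=1)
        inliers = resid < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine on the largest inlier set found
    R, t = estimate_rigid(src_kp[best_inliers], dst_kp[best_inliers])
    return (R, t), best_inliers
```

This is the sense in which RANSAC tolerates small overlap: wrong correspondences (including all matches outside the overlap region) simply end up as outliers, so a good descriptor only needs to produce enough correct matches inside the overlap for one sampled 3-point set to hit them.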