zhangxy0517 / 3D-Registration-with-Maximal-Cliques

Source code of CVPR 2023 paper

Three questions regarding the code #3

Closed. qiaozhijian closed this issue 1 year ago.

qiaozhijian commented 1 year ago

Thank you for your excellent work! I have three questions regarding the code.

Firstly, in the code located at this link: https://github.com/zhangxy0517/3D-Registration-with-Maximal-Cliques/blob/f85cc3e82822f93a5a49e8487c7831efb2ff68da/Linux/registration.cpp#L523, it appears that the edge weights are being added rather than multiplied as described in the associated paper. If this is the case, I am curious to know the reason for this change.
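For reference, here is how I read the two variants. This is only an illustrative sketch with a placeholder compatibility vector, not the actual repository code:

```cpp
// Illustrative sketch only, not the repository code.
// `compat` stands in for whatever set of pairwise compatibility terms
// contributes to the weight of one edge.
#include <vector>

// Paper-style edge weight: the compatibility terms are multiplied.
double edge_weight_product(const std::vector<double>& compat) {
    double w = 1.0;
    for (double c : compat) w *= c;
    return w;
}

// Code-style edge weight as I read registration.cpp#L523: the terms are summed.
double edge_weight_sum(const std::vector<double>& compat) {
    double w = 0.0;
    for (double c : compat) w += c;
    return w;
}
```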

Secondly, in the code located at this link: https://github.com/zhangxy0517/3D-Registration-with-Maximal-Cliques/blob/f85cc3e82822f93a5a49e8487c7831efb2ff68da/Linux/registration.cpp#L1160, it seems that a histogram is used to organize the scores, and the metric https://github.com/zhangxy0517/3D-Registration-with-Maximal-Cliques/blob/f85cc3e82822f93a5a49e8487c7831efb2ff68da/Linux/funcs.cpp#L551 is used to choose the threshold. Is this metric related to any statistical theory? Have you considered using the median instead?
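To make the comparison I have in mind concrete, here is a minimal sketch of an Otsu-style threshold computed on a score histogram versus simply taking the median of the raw scores. This is illustrative only; the bin layout and function names are placeholders, not the ones in funcs.cpp:

```cpp
// Minimal sketch, illustrative only. Assumes the histogram bins start at
// score 0 and have uniform width `bin_width`.
#include <algorithm>
#include <vector>

// Otsu-style threshold: pick the bin split that maximizes the
// between-class variance of the two resulting groups.
double otsu_threshold(const std::vector<int>& hist, double bin_width) {
    int total = 0;
    double sum_all = 0.0;
    for (std::size_t i = 0; i < hist.size(); ++i) {
        total += hist[i];
        sum_all += static_cast<double>(i) * hist[i];
    }
    double sum_b = 0.0, best_var = -1.0;
    int w_b = 0, best_bin = 0;
    for (std::size_t t = 0; t < hist.size(); ++t) {
        w_b += hist[t];
        if (w_b == 0) continue;
        const int w_f = total - w_b;
        if (w_f == 0) break;
        sum_b += static_cast<double>(t) * hist[t];
        const double m_b = sum_b / w_b;              // mean of the lower class
        const double m_f = (sum_all - sum_b) / w_f;  // mean of the upper class
        const double between =
            static_cast<double>(w_b) * w_f * (m_b - m_f) * (m_b - m_f);
        if (between > best_var) { best_var = between; best_bin = static_cast<int>(t); }
    }
    return best_bin * bin_width;  // map the chosen bin back to a score value
}

// Median alternative: an order statistic on the raw scores.
double median_threshold(std::vector<double> scores) {
    std::nth_element(scores.begin(), scores.begin() + scores.size() / 2, scores.end());
    return scores[scores.size() / 2];
}
```

My understanding is that Otsu's criterion maximizes the between-class variance of the two groups split by the threshold, so it does have a statistical interpretation; I am mainly wondering whether that is the intended motivation here, or whether a simpler order statistic such as the median would work as well.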

Thirdly, in the code located at this link: https://github.com/zhangxy0517/3D-Registration-with-Maximal-Cliques/blob/f85cc3e82822f93a5a49e8487c7831efb2ff68da/Linux/registration.cpp#L664-L676, I am unsure about the purpose of lines 664-673. Is it possible that the final value of `f * max(OTSU, total_factor)` is very similar to `cluster_factor[49].score`? Would it be possible to use `cluster_factor[49].score` directly instead? Additionally, this section of code appears to retain only the top 50 vertices, which may result in a very sparse compatibility graph. Could you please provide some insight into the reasoning behind this design choice?

zhangxy0517 commented 1 year ago

Your questions are all about the dynamic adjustment of the compatibility graph's scale. Suppose the inlier ratio of the input correspondence set is high; then the compatibility graph will be dense, which makes searching for maximal cliques over the whole graph more time-consuming. Based on this, we introduce the clustering coefficient to measure the density of the compatibility graph (please refer to our previous work, *Mutual Voting for Ranking 3D Correspondences*). If the coefficient is large, we reduce the size of the compatibility graph, that is, we retain only the nodes with higher weights and the edges formed between them. Note that this mechanism is not mentioned in our paper because: 1) it is not our main contribution; 2) the graph-reduction step only takes effect when the inlier ratio is high.
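A simplified sketch of the idea (not the exact code in registration.cpp; the clustering-coefficient computation, the density threshold, and the `top_k` value here are only placeholders):

```cpp
// Simplified sketch, not the exact repository code. The compatibility graph
// is given as a boolean adjacency matrix, with one weight per node.
#include <algorithm>
#include <numeric>
#include <vector>

// Average local clustering coefficient: for each node, the fraction of
// pairs of its neighbors that are themselves connected, averaged over nodes.
double avg_clustering_coefficient(const std::vector<std::vector<bool>>& adj) {
    const int n = static_cast<int>(adj.size());
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        std::vector<int> nb;
        for (int j = 0; j < n; ++j)
            if (adj[i][j]) nb.push_back(j);
        const int d = static_cast<int>(nb.size());
        if (d < 2) continue;
        int links = 0;
        for (int a = 0; a < d; ++a)
            for (int b = a + 1; b < d; ++b)
                if (adj[nb[a]][nb[b]]) ++links;
        sum += 2.0 * links / (d * (d - 1));
    }
    return n > 0 ? sum / n : 0.0;
}

// If the graph is dense (large clustering coefficient), keep only the
// top_k highest-weight nodes; cliques are then searched on the subgraph
// induced by these nodes. Otherwise keep the whole graph.
std::vector<int> select_nodes(const std::vector<std::vector<bool>>& adj,
                              const std::vector<double>& node_weight,
                              double density_thresh, int top_k) {
    std::vector<int> idx(adj.size());
    std::iota(idx.begin(), idx.end(), 0);
    if (avg_clustering_coefficient(adj) < density_thresh ||
        static_cast<int>(idx.size()) <= top_k)
        return idx;  // sparse enough: search the whole graph
    std::partial_sort(idx.begin(), idx.begin() + top_k, idx.end(),
                      [&](int a, int b) { return node_weight[a] > node_weight[b]; });
    idx.resize(top_k);
    return idx;
}
```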

qiaozhijian commented 1 year ago

Great idea. I think I should take the time to read that paper carefully.