Open wyaimyj opened 1 year ago

Hello, I would like to use your model to train on my own dataset. My dataset consists of point clouds of a statue captured from different viewpoints. However, I do not have the corresponding rotation and translation matrices between pairs of point clouds. How can I address this issue? Thank you!
Hi @wyaimyj, thanks for your interest in our work!
If you don't have the ground-truth transformation between two point clouds, you could adopt an unsupervised/self-supervised training strategy such as USIP or UDPReg. Alternatively, the ground-truth transformation between a pair of fragments can be generated manually with the MeshLab software. If you're interested in the SpinNet series, we also recommend taking a look at BUFFER.
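If you want to bootstrap those pairwise poses programmatically rather than aligning everything by hand in MeshLab, one common recipe is coarse RANSAC registration on FPFH features followed by ICP refinement. The sketch below is only illustrative (it assumes a recent Open3D; `estimate_pairwise_transform`, the voxel size, and the distance thresholds are placeholders that depend on the scale of your scans), and the resulting poses should be checked visually before being treated as ground truth:

```python
import open3d as o3d


def estimate_pairwise_transform(src_path, tgt_path, voxel=0.05):
    """Estimate a 4x4 transform mapping the source fragment into the target frame."""
    src = o3d.io.read_point_cloud(src_path)
    tgt = o3d.io.read_point_cloud(tgt_path)

    # Downsample and estimate normals (needed for point-to-plane ICP)
    src_d = src.voxel_down_sample(voxel)
    tgt_d = tgt.voxel_down_sample(voxel)
    for pc in (src_d, tgt_d):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))

    # FPFH descriptors for coarse, feature-based matching
    f_src, f_tgt = [
        o3d.pipelines.registration.compute_fpfh_feature(
            pc, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        for pc in (src_d, tgt_d)
    ]

    # Coarse alignment: RANSAC on FPFH correspondences
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_d, tgt_d, f_src, f_tgt, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
         o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Fine alignment: point-to-plane ICP initialized with the coarse pose
    fine = o3d.pipelines.registration.registration_icp(
        src_d, tgt_d, voxel * 0.4, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation
```

The returned 4x4 matrix can then be stored alongside each fragment pair as its (pseudo) ground-truth pose, e.g. `T = estimate_pairwise_transform("view_01.ply", "view_02.ply")`.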
Best, Sheng
Thank you for your response. I would also like to ask: how are the rotation and translation matrices for each point cloud obtained in datasets such as 3DMatch?
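In case it helps: as far as I know, the 3DMatch fragments are fused from RGB-D sequences whose camera poses come from the reconstruction pipelines of the source datasets, so the pairwise ground truth is composed from those absolute poses rather than annotated by hand. A minimal sketch of that composition (pose conventions differ between datasets, so verify the direction on your own data):

```python
import numpy as np


def relative_transform(T_src, T_tgt):
    """Relative pose mapping points from the source fragment's frame into the
    target fragment's frame, given absolute (fragment-to-world) 4x4 poses."""
    return np.linalg.inv(T_tgt) @ T_src
```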