Closed daisyranc closed 2 years ago
Also, how do I get the final registration results?
Thank you so much!
Hello, thanks for your interest in our work!
Of course you can! Please check out our dataset class at `pytorch/dataset/flow_dataset.py` to see how data is loaded. Essentially you need to pack each datum into an npz file containing the following keys: `pcs`, `flows`, and `flow_masks`. Here `pcs` is a (K, N, 3) array (K is the number of views and N is the number of points), `flows` is a (K, K, N, 3) array containing all pairwise flow vectors, and `flow_masks` is a (K, K, N) boolean array indicating the occlusion state of each flow pair.
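The packing described above can be sketched as follows. This is a minimal example with random stand-in data; the sizes are illustrative and only the key names and shapes come from the description above:

```python
import os
import tempfile
import numpy as np

K, N = 4, 1024  # K views, N points per view (illustrative sizes)
rng = np.random.default_rng(0)

# Random stand-ins for real data; replace with your own scans and flows.
pcs = rng.standard_normal((K, N, 3)).astype(np.float32)       # (K, N, 3) point clouds
flows = rng.standard_normal((K, K, N, 3)).astype(np.float32)  # (K, K, N, 3) pairwise flow vectors
flow_masks = rng.random((K, K, N)) > 0.1                      # (K, K, N) boolean occlusion mask

path = os.path.join(tempfile.mkdtemp(), "datum.npz")
np.savez(path, pcs=pcs, flows=flows, flow_masks=flow_masks)

# Reload to verify the stored shapes.
data = np.load(path)
```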
Do you mean the visualization code? For that, you can add the `--visualize` flag to the `pytorch/evaluate.py` script. It uses the Open3D viewer to show you the registration result.
Best.
@heiwang1997 Thank you for your explanation! I have solved the second question with your nice visualization code! But for the first question, I have checked all the datasets you provided: 'pcs' and 'flows' are two indispensable elements. 'pcs' is easy to generate by myself, but how do I generate 'flows' and 'flow_masks' automatically?
That's great to hear! The flows and flow_masks are ground-truth supervision. If you just want to test your own data using the pretrained model, simply exclude them from the npz file. If you still want to train/finetune on your own data, you could try out the self-supervised configs at `pytorch/configs/mpc-cape/*_self.yaml`.
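For testing only, the npz file then needs just the `pcs` key. A minimal sketch (the file name and sizes here are made up):

```python
import os
import tempfile
import numpy as np

# Two views of the same scene (random stand-ins for real scans).
K, N = 2, 2048
pcs = np.random.randn(K, N, 3).astype(np.float32)

# For inference only 'pcs' is required; no 'flows' or 'flow_masks'.
path = os.path.join(tempfile.mkdtemp(), "test_pair.npz")
np.savez(path, pcs=pcs)

loaded = np.load(path)
```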
@heiwang1997 Thank you for your reply! So: 1) If I only want to test on my own data (for example, using the mpc-dt4d checkpoints), I just need to put two views into one npz file. Should I do any preprocessing here? Should I make the point clouds the same size? 2) If I want to retrain the model, I need to prepare the point clouds as 'pcs', 'flows', and 'flow_masks' (as ground truth) and pack them into npz files, then train the networks with `python train.py configs/mpc-cape/train_desc_self.yaml` and `python train.py configs/mpc-cape/train_basis_self.yaml`. Are these steps correct? Also, are there any additional requirements or preprocessing for the data?
@daisyranc Hi!
You can use point clouds of different sizes because we use a voxelized network backbone -- simply use a list of two PC arrays. For preprocessing, it is better to center your points and scale them to roughly unit size -- this ensures the voxel size of the network backbone fits your input.
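The centering-and-scaling step might look like this (a hedged sketch; `normalize_pc` is our own helper name, not a function from the repo):

```python
import numpy as np

def normalize_pc(pc):
    """Center a point cloud at the origin and scale it to roughly unit size."""
    pc = pc - pc.mean(axis=0, keepdims=True)  # move centroid to the origin
    scale = np.abs(pc).max()                  # largest coordinate magnitude
    return pc / scale                         # all coordinates now in [-1, 1]

# Example: a point cloud far from the origin with a large extent.
pc = np.random.rand(1000, 3) * 50.0 + 10.0
pc_n = normalize_pc(pc)
```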
If you have ground-truth labels, then you'd better use the fully-supervised configs, because they give better results (i.e. `python train.py configs/mpc-cape/train_desc.yaml` and `python train.py configs/mpc-cape/train_basis.yaml`). Otherwise (if you don't have ground-truth labels), your commands are correct.
Best.
@heiwang1997 Thank you so much for your patience. My last question: the point clouds from different views or different poses should be put into one npz file as 'pcs'. If they have different numbers of points, how do I generate the npz file and run testing?
I think it would be better to add some guidelines on how readers can test on their own datasets, such as how to generate a proper data format for your test dataloader.
Hi, in that case you can simply use a Python list of the PC arrays as 'pcs'. They do not need to be packed into one numpy array.
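One way to store such a ragged list of views in an npz file is a numpy object array. This is a sketch of the npz side only; whether the loader consumes it directly depends on the repo's dataset class:

```python
import os
import tempfile
import numpy as np

# Two views with different point counts (random stand-ins for real scans).
pc_a = np.random.randn(1500, 3).astype(np.float32)
pc_b = np.random.randn(2300, 3).astype(np.float32)

# A ragged object array: one slot per view, each holding its own (N_i, 3) array.
pcs = np.empty(2, dtype=object)
pcs[0], pcs[1] = pc_a, pc_b

path = os.path.join(tempfile.mkdtemp(), "pair.npz")
np.savez(path, pcs=pcs)

# Object arrays are pickled inside the npz, so allow_pickle=True is needed on load.
loaded = np.load(path, allow_pickle=True)["pcs"]
```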
Thank you so much for your suggestion; we will try our best to add those instructions soon.
@heiwang1997 This is excellent work! Thank you for being so patient in answering!
Thanks for your excellent work and shared code!
I have trained and tested on the dataset you provided! However, how do I train and test on my own dataset? Should I do any preprocessing?