drprojects / superpoint_transformer

Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D Semantic Segmentation with Superpoint Transformer" and SuperCluster introduced in [3DV'24 Oral] "Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering"

Problem with processing of the point cloud #11

Closed (iacopo97 closed this 1 year ago)

iacopo97 commented 1 year ago

Good morning, I have added another dataset consisting of a single point cloud with about 1,000,000 points. After the preprocessing step, the number of points is reduced to roughly 79,000, which is less than 10% of the original count. Do you have any idea how to solve this problem, or which function I should modify? Sorry to bother you, and thank you for your work.

drprojects commented 1 year ago

Hi, have you had a look at #9 ?

Just for clarity, I am assuming you understand what the level-0 and level-1 Data objects in the NAG structure are. If not, I invite you to have a look at the data structures documentation and at the code of the respective objects, which is extensively commented.
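As a starting point, here is a minimal sketch of how you could inspect how many points each level of a preprocessed NAG holds. The `NAG.load` call, the level indexing, and the attribute names are assumed from the repository's data structures; the file path is a placeholder for one of your own preprocessed files.

```python
# Minimal sketch, assuming the repo's NAG API (src/data/nag.py).
# The file path below is a placeholder, not a real dataset path.
from src.data import NAG

nag = NAG.load('data/my_dataset/processed/my_cloud.h5')

# nag[i] is the level-i Data object: level 0 holds the voxelized points,
# level 1 the superpoints, and so on up the partition hierarchy.
for i in range(nag.num_levels):
    print(f"level-{i}: {nag[i].num_points} points")
```

Comparing the level-0 count against your raw point count tells you how much the voxelization step removed.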

That being said, I am guessing the level-0 Data object in your NAG has far fewer points than your raw point cloud. If so, it is the result of the voxelization step called at preprocessing time using GridSampling3D. If your data is extremely dense, this step will remove a lot of points. You can play with the value of datamodule.voxel in your configs if you want to change the voxel size. Importantly, you should choose the voxel size based on the size of the objects you are looking for in the scene, not based on the point count you want to end up with. As an illustration, the raw S3DIS dataset is extremely dense, but people usually process it at 2-3 cm voxel resolution, which is sufficient for semantic segmentation of the dataset's classes.
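If you want to see the effect of the voxel size before touching your configs, you could run the voxelization transform directly on a toy cloud. This is a rough sketch only: it assumes `GridSampling3D` and `Data` are importable as below, and the cloud extent and voxel sizes are made up for illustration.

```python
# Rough sketch, assuming GridSampling3D from src/transforms and the repo's
# Data object. The toy cloud and voxel sizes are illustrative only.
import torch
from src.data import Data
from src.transforms import GridSampling3D

# 1M random points in a 10m x 10m x 3m box, mimicking a dense indoor scan
data = Data(pos=torch.rand(1_000_000, 3) * torch.tensor([10., 10., 3.]))

for voxel in [0.02, 0.05, 0.10]:
    sampled = GridSampling3D(size=voxel)(data)
    print(f"voxel={voxel:.2f} m -> {sampled.num_points} points kept")
```

If your setup follows the repo's Hydra-style configs, the equivalent change at training time would be an override such as `datamodule.voxel=0.05` on the command line, or editing the corresponding field in your datamodule config.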

iacopo97 commented 1 year ago

Good morning, thank you very much for your kind and thorough answer; I will try what you have suggested.