drprojects / superpoint_transformer

Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D Semantic Segmentation with Superpoint Transformer" and SuperCluster introduced in [3DV'24 Oral] "Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering"

Adjusting the partition hyperparameters `regularization`, `cutoff`, `spatial_weight` #50

Closed: wy9933 closed this issue 8 months ago

wy9933 commented 8 months ago

Hello,

I'm using the code to process my own data. When I visualized the partition results, I noticed that each superpoint in the first level is very small. I want to adjust the hyperparameters to obtain larger superpoints at the first level. How should I set `regularization`, `cutoff`, and `spatial_weight` to achieve this?

I would also like to understand what these three parameters do in the partition process. Could you briefly explain them, or point me to relevant documentation?

Thanks a lot! :smile:

wy9933 commented 8 months ago

By the way, if I want to get the point-wise predicted and ground-truth labels for the complete scene, what should I do?

drprojects commented 8 months ago

Indeed, the first level of the partition is supposed to produce small superpoints. We found that having roughly 30-50 points per superpoint at the first level was a good trade-off for a partition that is both semantically pure and simplifies the scene. As explained in our paper, this is the general rule of thumb we used to parameterize the partition on our datasets, but this is up to you and your specific problem.

You can have a look at the code for `CutPursuitPartition` to see how these parameters are used; it is fairly well commented. You can also play with the voxel size.
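Roughly speaking (please double-check against the docstring in `src/transforms/partition.py`): `regularization` sets the strength of the cut-pursuit penalty, so larger values produce coarser superpoints; `cutoff` is the minimum number of points per superpoint, below which components get merged into their neighbors; `spatial_weight` controls how much the point positions weigh against the handcrafted features in the partition energy. A minimal sketch of coarsening the first level, with purely illustrative values and assuming the transform takes one value per partition level:

```python
from src.transforms import CutPursuitPartition

# Illustrative values only, one entry per partition level.
# Raising `regularization` and `cutoff` at a given level should
# yield larger superpoints at that level.
partition = CutPursuitPartition(
    regularization=[0.1, 0.2],    # higher values -> coarser superpoints
    cutoff=[30, 100],             # minimum superpoint size; smaller ones are merged
    spatial_weight=[1e-1, 1e-2],  # weight of XYZ vs features in the energy
)

nag = partition(data)  # `data` is a preprocessed Data object; returns a NAG
```

The same knobs are exposed in the dataset configs, so you can also adjust them there rather than instantiating the transform by hand.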

> By the way, if I want to get the point-wise predicted and ground-truth labels for the complete scene, what should I do?

See my replies in #9 and #44. I will work on it this month.
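In the meantime, the rough idea is to broadcast the superpoint-level predictions back down the partition hierarchy. A minimal sketch, assuming `nag` is the `NAG` for the scene, `logits` is the model output with one row per level-1 superpoint, and `nag[0].super_index` maps each level-0 point to its level-1 superpoint (attribute names to be checked against the code):

```python
# Predicted class per level-1 superpoint
pred_1 = logits.argmax(dim=1)

# Broadcast predictions down to level-0 points via the parent index
pred_0 = pred_1[nag[0].super_index]

# Ground truth at level 0; if stored as a per-point label histogram,
# reduce it to a single label with an argmax
y_0 = nag[0].y
if y_0.dim() == 2:
    y_0 = y_0.argmax(dim=1)
```

Note this gives labels at the voxelized level-0 resolution; mapping back to the raw input points additionally requires the voxel-to-point indices saved during preprocessing.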