drprojects / superpoint_transformer

Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D Semantic Segmentation with Superpoint Transformer" and SuperCluster introduced in [3DV'24 Oral] "Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering"
MIT License

how to get the superpoints of each point #120

Closed zeyu659 closed 3 weeks ago

zeyu659 commented 3 weeks ago

Hello @drprojects,

Thank you for your wonderful work, SPT! My understanding is that SPT first partitions the input point cloud into superpoints, and then performs segmentation based on those superpoints. My question is how to compute and extract the superpoint label of each point in my dataset. More concretely: if a scene contains 100 point clouds, I need to extract the 100 corresponding superpoint label arrays and save each as a `.npy` file. Could you illustrate this using ScanNet data as an example?

drprojects commented 3 weeks ago

Have a look at the data structures documentation. What you are looking for is:

# Indices of parent superpoints for all points in P0
nag[0].super_index

PS: Note that, as explained in the README and demonstrated in the demo.ipynb notebook, nag[0] holds the $P_0$ points, which are usually a voxelized version of your raw, full-resolution input point cloud. If you are interested in recovering full-resolution attributes, see the provided documentation, notebooks, and already-existing issues on full-resolution output.
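For the `.npy` export asked about above, a minimal sketch could look like the following. Note this mocks `nag[0].super_index` with a plain NumPy array so the snippet is self-contained; in a real pipeline you would obtain it from an SPT `NAG` object as shown in the comment, and the filename is purely illustrative.

```python
import numpy as np

# In SPT, `nag[0].super_index` maps every P0 point to its parent superpoint.
# In practice you would extract it from a NAG produced by the SPT pipeline,
# e.g. roughly: super_index = nag[0].super_index.cpu().numpy()
# Here we mock it with a plain array: 6 points belonging to 3 superpoints.
super_index = np.array([0, 0, 1, 1, 1, 2])

# Save the per-point superpoint labels for one cloud as a .npy file
# (filename is hypothetical; repeat per cloud in your dataset)
np.save("scene_superpoint_labels.npy", super_index)

# Reload to verify the round trip
labels = np.load("scene_superpoint_labels.npy")
print(labels.shape)        # one label per P0 point
print(labels.max() + 1)    # number of superpoints
```

Keep in mind these labels index superpoints over the voxelized $P_0$ points, not the raw full-resolution cloud, per the PS above.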

PS2: I will be giving a live tutorial on SPT next week, you might want to attend: https://www.linkedin.com/feed/update/urn:li:activity:7209130541625790465

zeyu659 commented 3 weeks ago

Thank you for your reply and guidance! Looking forward to your live tutorial!