drprojects / superpoint_transformer

Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D Semantic Segmentation with Superpoint Transformer" and SuperCluster introduced in [3DV'24 Oral] "Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering"
MIT License
567 stars · 74 forks

Speeding up the NAG generation #127

Closed: ecustWallace closed this issue 3 months ago

ecustWallace commented 3 months ago

Greetings,

First, thank you so much for this awesome repo and the papers! They are all fantastic and exciting works!

I just want to ask whether there are any general tips for speeding up the NAG generation from raw data, e.g. which parameters in the config file I should pay attention to, or any other tricks?

Thank you!

drprojects commented 3 months ago

Hi @ecustWallace, thanks for your interest in the project!

As you can imagine, the preprocessing parametrization we use for the implemented datasets is the best we could find to strike a balance between computation speed and downstream performance. So trying to make it faster is in itself a little research project :wink:

The parts that usually take the most time in the preprocessing are, in decreasing order:

That being said, here are the parameters in `configs/datamodule/semantic/your_datamodule.yaml` that impact the preprocessing speed the most. It is up to you to play with those, see whether they help your downstream task, and profile the preprocessing to check which part takes too long for you:
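As an illustration only, a sketch of the kind of speed-relevant knobs found in SPT-style datamodule configs is shown below. The key names and values here are assumptions for illustration, not the maintainer's actual list; check them against your own `your_datamodule.yaml` before changing anything:

```yaml
# Hypothetical sketch of speed-relevant preprocessing parameters.
# Names and values are illustrative; verify against the real config.
voxel: 0.05            # larger voxel size -> fewer points -> faster downstream steps
knn: 25                # fewer neighbors -> faster neighbor search and feature computation
pcp_regularization:    # coarser partition levels -> fewer, larger superpoints
  - 0.1
  - 0.2
  - 0.5
```

Any change here trades preprocessing speed against partition quality, so re-evaluate downstream metrics after tuning.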

Again, it is up to you to investigate this for your specific use case!
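For the profiling step suggested above, a generic approach (independent of this repo) is to wrap each preprocessing stage in a timer and print each stage's share of the total runtime. The stage bodies below are placeholders standing in for the real preprocessing steps:

```python
# Minimal per-stage timing sketch; the two "stages" are dummy workloads,
# not actual functions from superpoint_transformer.
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timer(name):
    """Accumulate wall-clock time spent inside the block under `name`."""
    start = time.perf_counter()
    yield
    timings[name] = timings.get(name, 0.0) + time.perf_counter() - start

# Placeholder stages: replace with e.g. voxelization, neighbor search, partition
with timer("stage_a"):
    sum(i * i for i in range(100_000))
with timer("stage_b"):
    sorted(range(50_000), reverse=True)

total = sum(timings.values())
for name, t in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {t:.4f}s ({100 * t / total:.1f}%)")
```

For finer-grained breakdowns, Python's built-in `cProfile` module gives per-function statistics without modifying the code.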