drprojects / superpoint_transformer

Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D Semantic Segmentation with Superpoint Transformer" and SuperCluster introduced in [3DV'24 Oral] "Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering"
MIT License

DDP/DP training #71

Closed Charlie839242 closed 6 months ago

Charlie839242 commented 6 months ago

Hi Damien,

Thanks for your great paper and awesome project! I am not very sure about one point: will simply changing gpu.yaml to ddp.yaml make the model train in DDP mode? And is synchronization of batch norm supported in this case? Thanks for your reply!

drprojects commented 6 months ago

Hi @Charlie839242, thanks for your interest in the project!

That is correct: using the ddp.yaml config should allow you to do multi-GPU training. For more information, please refer to the documentation of the lightning-hydra template and Lightning, on which this project is based.
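For reference, a DDP trainer config in the lightning-hydra template style typically looks like the sketch below. The exact keys shipped with this repository may differ, so treat the device counts and the `sync_batchnorm` flag as illustrative assumptions; `sync_batchnorm: True` is the standard Lightning Trainer option for synchronizing batch norm statistics across GPUs.

```yaml
# configs/trainer/ddp.yaml — illustrative sketch, not the repo's exact file
defaults:
  - default

# Distributed Data Parallel: one process per GPU
strategy: ddp
accelerator: gpu
devices: 4        # adjust to your number of GPUs
num_nodes: 1

# Convert BatchNorm layers to SyncBatchNorm so statistics
# are synchronized across all processes (Lightning Trainer flag)
sync_batchnorm: True
```

With Hydra, this would then be selected at launch time, e.g. `python src/train.py trainer=ddp`.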

Two remarks though:

PS: if you like or use this project, please give it a ⭐, it means a lot to us!

Charlie839242 commented 6 months ago

Thanks a lot for your quick reply! My question is solved.