drprojects / superpoint_transformer

Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D Semantic Segmentation with Superpoint Transformer" and SuperCluster introduced in [3DV'24 Oral] "Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering"
MIT License

How to train on different devices? #123

Closed · Wind010321 closed this 3 weeks ago

Wind010321 commented 3 weeks ago

Hello, if I have 4 GPUs on 1 node and want to run separate trainings, each on its own GPU (e.g., training1 -> gpu0, training2 -> gpu1, ...), how do I specify which GPU each training run should use (e.g., gpu0, gpu1)?

drprojects commented 3 weeks ago

The code is based on the lightning-hydra template, which itself builds on PyTorch Lightning. Please have a look at the corresponding documentation:

- https://github.com/ashleve/lightning-hydra-template
- https://lightning.ai/docs/pytorch/stable/
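For reference, here is a minimal sketch (my own illustration, not code from this repository) of how PyTorch Lightning pins a run to a single GPU through the `devices` argument of `Trainer`. Two independent trainings launched as separate processes can each claim one GPU this way; the `model` and `datamodule` placeholders stand in for whatever objects your project builds.

```python
# Minimal sketch: selecting a specific GPU with PyTorch Lightning.
import pytorch_lightning as pl

# Process 1: train on GPU 0 only.
trainer = pl.Trainer(accelerator="gpu", devices=[0], max_epochs=10)

# Process 2 (launched separately): train on GPU 1 only.
# trainer = pl.Trainer(accelerator="gpu", devices=[1], max_epochs=10)

# trainer.fit(model, datamodule=datamodule)  # supply your own model/datamodule
```

Since the template exposes Trainer arguments through Hydra, the equivalent command-line override should look like `python src/train.py trainer.devices=[0]` (assuming the template's usual `src/train.py` entry point). Alternatively, setting `CUDA_VISIBLE_DEVICES=0` in the shell before launching restricts the process to that GPU regardless of framework.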

Wind010321 commented 3 weeks ago

Thank you!