Pang-Yatian / Point-MAE

[ECCV2022] Masked Autoencoders for Point Cloud Self-supervised Learning
MIT License
448 stars · 55 forks

How to distribute data parallel training the code #38

Closed: Chopper-233 closed this issue 11 months ago

Chopper-233 commented 11 months ago

I'm trying to train your code on my Linux system. My command is `CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 main.py --config ./cfgs/pretrain.yaml --launcher pytorch`. I'm wondering whether it's right or not.

Pang-Yatian commented 11 months ago

Yes, it is correct.
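For readers landing here later, the launch command from the thread can be sketched as below (with the `--nproc_per_node` flag spelled correctly; the original post had a typo, `--nproc_pec_node`). The commented-out variant is an assumption on my part: on PyTorch 1.10 and newer, `torchrun` is the recommended replacement for the deprecated `torch.distributed.launch` module.

```shell
# Data-parallel pretraining of Point-MAE on 4 GPUs (command from this thread).
# CUDA_VISIBLE_DEVICES restricts which GPUs the process may see;
# --nproc_per_node spawns one worker process per GPU.
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 \
    main.py --config ./cfgs/pretrain.yaml --launcher pytorch

# Equivalent invocation on newer PyTorch (>= 1.10), where torchrun is preferred:
# CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 \
#     main.py --config ./cfgs/pretrain.yaml --launcher pytorch
```

Note that `--launcher pytorch` is a flag of Point-MAE's own `main.py` (telling it to initialize the process group from the launcher-provided environment variables), not a flag of `torch.distributed.launch` itself.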