JIAOJIAYUASD opened this issue 2 years ago
submitit is used for "Multi-node Multi-gpu distributed training"
Alright, maybe the submitit script is for a server cluster different from the one I use. I solved the problem by using the main_pretrain.py script with torch.distributed.launch instead. Thanks for answering my question.
Sure, submitit is used for Slurm clusters, and torch.distributed.launch is another common way.
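In case it is useful, the repo also ships a submitit_pretrain.py wrapper for Slurm; below is a minimal sketch loosely following the repo's PRETRAIN.md (treat it as an assumption on my part; the job dir, partition, and data path are placeholders for your cluster):
python submitit_pretrain.py \
--job_dir ${JOB_DIR} \
--partition <your-partition> \
--nodes 1 --ngpus 2 \
--batch_size 16 --accum_iter 4 \
--model mae_vit_base_patch16 \
--norm_pix_loss \
--mask_ratio 0.75 \
--epochs 800 \
--warmup_epochs 40 \
--blr 1.5e-4 --weight_decay 0.05 \
--data_path ${IMAGENET_DIR}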
Here is an example, in case you were looking for that sort of thing (2 gpus used here):
python -m torch.distributed.launch --nproc_per_node=2 main_pretrain.py --batch_size 16 \
--world_size 2 \
--accum_iter 4 \
--model mae_vit_base_patch16 \
--norm_pix_loss \
--mask_ratio 0.75 \
--epochs 800 \
--warmup_epochs 40 \
--blr 1.5e-4 --weight_decay 0.05
PS: works with PyTorch 1.8.x.
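For what it's worth, my reading of main_pretrain.py (an assumption on my part, so please double-check) is that the effective batch size of the command above is batch_size * accum_iter * num_gpus = 16 * 4 * 2 = 128, and the actual learning rate is scaled as lr = blr * 128 / 256.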
What changes should I make to the command above if I have two nodes with 8 GPUs each and also want to keep the batch size the same?
@kalyani7195
Node 1:
python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=0 \
--master_addr=<node-0-ip> --master_port=29500 main_pretrain.py --batch_size 16 \
--world_size 16 \
--accum_iter 4 \
--model mae_vit_base_patch16 \
--norm_pix_loss \
--mask_ratio 0.75 \
--epochs 800 \
--warmup_epochs 40 \
--blr 1.5e-4 --weight_decay 0.05
Node 2:
python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=1 \
--master_addr=<node-0-ip> --master_port=29500 main_pretrain.py --batch_size 16 \
--world_size 16 \
--accum_iter 4 \
--model mae_vit_base_patch16 \
--norm_pix_loss \
--mask_ratio 0.75 \
--epochs 800 \
--warmup_epochs 40 \
--blr 1.5e-4 --weight_decay 0.05
Modify --batch_size and --accum_iter according to your hardware.
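Continuing the effective-batch-size arithmetic above (again my reading of the code, so treat it as an assumption): to keep the effective batch size at 128 with 2 x 8 = 16 GPUs, you could pass, e.g., --batch_size 8 --accum_iter 1, since 8 * 1 * 16 = 128.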
PS: For PyTorch >= 1.9, replace python -m torch.distributed.launch with torchrun.
PS2: I haven't tried multi-node multi-GPU training myself.
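For completeness, a torchrun sketch of the node-0 command under the same assumptions (untested on my side; <node-0-ip> is a placeholder, and node 1 would only change --node_rank=1):
torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 \
--master_addr=<node-0-ip> --master_port=29500 main_pretrain.py --batch_size 16 \
--world_size 16 \
--accum_iter 4 \
--model mae_vit_base_patch16 \
--norm_pix_loss \
--mask_ratio 0.75 \
--epochs 800 \
--warmup_epochs 40 \
--blr 1.5e-4 --weight_decay 0.05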
Thanks for sharing!
I want to raise another question, about the performance difference (the speed at which the loss decreases) between these two implementations.
As far as I have tested with the exact same model, the original implementation (based on submitit) performs much better than the other one w.r.t. the speed of loss decrease, in the case of a single V100 on a single node.
Maybe you can also test this interesting behavior. Thanks a lot.
Hi, may I ask how to do the pre-training on a single GPU?
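In case it helps, a minimal single-GPU sketch, assuming main_pretrain.py falls back to non-distributed mode when no launcher environment variables are set (the --accum_iter value is just an illustration):
python main_pretrain.py --batch_size 16 \
--accum_iter 8 \
--model mae_vit_base_patch16 \
--norm_pix_loss \
--mask_ratio 0.75 \
--epochs 800 \
--warmup_epochs 40 \
--blr 1.5e-4 --weight_decay 0.05
Here the effective batch size would be 16 * 8 * 1 = 128, matching the 2-GPU command above.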
Hello, I want to ask how to run MAE pre-training with multi-node multi-GPU distributed training over the network. Can you provide a script?