facebookresearch / mae

PyTorch implementation of MAE: https://arxiv.org/abs/2111.06377

Multi-node Multi-gpu distributed training #48

Open JIAOJIAYUASD opened 2 years ago

JIAOJIAYUASD commented 2 years ago

Hello, I want to ask how to run MAE pre-training with multi-node multi-GPU distributed training across the network. Can you provide a script?

BIGBALLON commented 2 years ago

submitit is used for multi-node multi-GPU distributed training.

JIAOJIAYUASD commented 2 years ago

All right, maybe the submitit script is for a server cluster different from the one I use. I solved the problem by using the main_pretrain.py script with torch.distributed.launch. Thanks for answering my question.

BIGBALLON commented 2 years ago

> All right, maybe the submitit script is for a server cluster different from the one I use. I solved the problem by using the main_pretrain.py script with torch.distributed.launch. Thanks for answering my question.

Sure. submitit is used for Slurm clusters, and torch.distributed.launch is another common way.
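For reference, the MAE repository ships a submitit helper (submitit_pretrain.py) for launching on Slurm; a rough sketch of invoking it for two 8-GPU nodes follows. The flag names are taken from the repo's pre-training instructions and may differ between versions, so check them against your checkout; JOB_DIR is a placeholder.

```shell
# Slurm launch via submitit (untested sketch; verify flags against the
# submitit_pretrain.py in your checkout of the MAE repo):
python submitit_pretrain.py \
    --job_dir ${JOB_DIR} \
    --nodes 2 --ngpus 8 \
    --batch_size 16 \
    --model mae_vit_base_patch16 \
    --norm_pix_loss \
    --mask_ratio 0.75 \
    --epochs 800 --warmup_epochs 40 \
    --blr 1.5e-4 --weight_decay 0.05
```

submitit handles the Slurm job submission and per-node rendezvous itself, which is why no master address appears on the command line.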

black0017 commented 2 years ago

Here is an example, in case you were looking for that sort of thing (2 GPUs used here):

python -m torch.distributed.launch --nproc_per_node=2 main_pretrain.py  --batch_size 16 \
    --world_size 2 \
    --accum_iter 4 \
    --model mae_vit_base_patch16 \
    --norm_pix_loss \
    --mask_ratio 0.75 \
    --epochs 800 \
    --warmup_epochs 40 \
    --blr 1.5e-4 --weight_decay 0.05 

PS: works with PyTorch 1.8.x
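As a sanity check on the flags above: main_pretrain.py derives the effective batch size as batch_size × accum_iter × number of processes, and the absolute learning rate as blr × eff_batch_size / 256 (the linear scaling rule described in the MAE README). The arithmetic for this 2-GPU command:

```python
# Effective batch size and absolute LR for the command above,
# following the scaling rule from the MAE README.
def effective_batch_size(batch_size: int, accum_iter: int, world_size: int) -> int:
    return batch_size * accum_iter * world_size

def absolute_lr(blr: float, eff_batch_size: int, base: int = 256) -> float:
    return blr * eff_batch_size / base

# --batch_size 16 --accum_iter 4 on 2 processes:
eff = effective_batch_size(16, 4, 2)
print(eff)                       # 128
print(absolute_lr(1.5e-4, eff))  # 7.5e-05
```

So this run trains with an effective batch size of 128 and an absolute learning rate of 7.5e-5, not the raw blr value.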

kalyani7195 commented 1 year ago

What changes should I make to the command above if I have two nodes with 8 gpus each and I also want to keep the batch size the same?

black0017 commented 1 year ago

@kalyani7195

Node 1:

python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=0 main_pretrain.py  --batch_size 16 \
    --world_size 2 \
    --accum_iter 4 \
    --model mae_vit_base_patch16 \
    --norm_pix_loss \
    --mask_ratio 0.75 \
    --epochs 800 \
    --warmup_epochs 40 \
    --blr 1.5e-4 --weight_decay 0.05 

Node 2:

python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=1  main_pretrain.py  --batch_size 16 \
    --world_size 2 \
    --accum_iter 4 \
    --model mae_vit_base_patch16 \
    --norm_pix_loss \
    --mask_ratio 0.75 \
    --epochs 800 \
    --warmup_epochs 40 \
    --blr 1.5e-4 --weight_decay 0.05 

Modify batch size according to your hardware.
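To answer the "keep the batch size the same" part concretely: the quantity to hold constant is batch_size × accum_iter × total number of processes. Going from 2 processes to 16 (two nodes × 8 GPUs), the per-GPU batch size or accum_iter has to shrink accordingly. A sketch of the arithmetic (plain Python, not code from the repo):

```python
# Effective batch size = per-GPU batch * gradient-accumulation steps * processes.
def effective_batch(per_gpu_batch: int, accum_iter: int, world_size: int) -> int:
    return per_gpu_batch * accum_iter * world_size

# Original 2-GPU setting: --batch_size 16 --accum_iter 4
orig = effective_batch(16, 4, 2)
print(orig)              # 128

# Two nodes x 8 GPUs = 16 processes. Keeping the same effective batch
# size needs per_gpu_batch * accum_iter == 128 / 16 == 8,
# e.g. --batch_size 8 --accum_iter 1:
scaled = effective_batch(8, 1, 16)
print(orig == scaled)    # True
```

Keeping the commands above unchanged (batch 16, accum 4) on 16 processes would instead raise the effective batch size to 1024 and, via the blr scaling rule, the learning rate with it.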

PS: For PyTorch >= 1.9, replace python -m torch.distributed.launch with torchrun.
PS2: I haven't tried multi-node multi-GPU training myself.
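One caveat about the two-node commands above: torch.distributed.launch defaults the rendezvous address to localhost, so a real multi-node run also needs the master address and port of node 0 on the command line. A torchrun equivalent would look roughly like this (flag names per the torchrun documentation; untested sketch, MASTER_IP is a placeholder for node 0's address):

```shell
# Node 0 (untested sketch; MASTER_IP is a placeholder):
torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 \
    --rdzv_backend=c10d --rdzv_endpoint=MASTER_IP:29500 \
    main_pretrain.py --batch_size 16 --accum_iter 4 \
    --model mae_vit_base_patch16 --norm_pix_loss --mask_ratio 0.75 \
    --epochs 800 --warmup_epochs 40 --blr 1.5e-4 --weight_decay 0.05

# Node 1: identical except --node_rank=1.
```

torchrun sets RANK/WORLD_SIZE/LOCAL_RANK in the environment, which MAE's distributed setup reads, so the --world_size flag is effectively ignored under this launcher.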

Walnutes commented 8 months ago

> Here is an example, in case you were looking for that sort of thing (2 GPUs used here):
>
> python -m torch.distributed.launch --nproc_per_node=2 main_pretrain.py  --batch_size 16 \
>     --world_size 2 \
>     --accum_iter 4 \
>     --model mae_vit_base_patch16 \
>     --norm_pix_loss \
>     --mask_ratio 0.75 \
>     --epochs 800 \
>     --warmup_epochs 40 \
>     --blr 1.5e-4 --weight_decay 0.05
>
> PS: works with PyTorch 1.8.x

Thanks for sharing! I want to raise another question about the different performance (the speed of loss decrease) between these two implementations. Testing the exact same model, the original implementation (based on submitit) shows much faster loss decrease than the other implementation, even in the case of a single V100 on a single node.
Maybe you can also test this interesting behavior. Thanks a lot.

JiaxuanFelixLiUNNC commented 3 months ago

Hi, may I ask how to do the pre-training on a single GPU?