facebookresearch / mae

PyTorch implementation of MAE https://arxiv.org/abs/2111.06377

Command for multinode + main_pretrain.py #134

Open kalyani7195 opened 1 year ago

kalyani7195 commented 1 year ago

Hi! Thank you for your great work! I want to replicate the results, so I am pretraining MAE on ImageNet.

I want to use 2 nodes with 8 GPUs each, and I want to keep the effective batch size the same as for the released model, so I have set the world size to 16. The effective batch size should then be 2 (nodes) × 8 (GPUs) × 4 (accum_iter) × 64 (batch_size) = 4096.
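For reference, a quick sanity check of that arithmetic; note that the single-node version of the same product happens to equal the 2048 reported below:

# effective batch = nodes * GPUs-per-node * accum_iter * per-GPU batch_size
echo $((2 * 8 * 4 * 64))   # 4096 -- the intended two-node setup
echo $((1 * 8 * 4 * 64))   # 2048 -- what a launch that only reaches one node would give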

However, when I run the following command, I keep getting an effective batch size of 2048. Am I missing something? Any help regarding this is greatly appreciated!! Thank you!

OMP_NUM_THREADS=1 python -m torch.distributed.launch --nproc_per_node=8 main_pretrain.py \
    --batch_size 64 \
    --world_size 16 \
    --accum_iter 4 \
    --output_dir pt800_mae --log_dir pt800_mae \
    --model mae_vit_base_patch16 \
    --norm_pix_loss \
    --mask_ratio 0.75 \
    --epochs 800 \
    --warmup_epochs 40 \
    --blr 1.5e-4 --weight_decay 0.05 \
    --data_path /data/imagenet/
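For comparison, if the process count is determined by the launcher rather than by main_pretrain.py's --world_size flag (a common setup), a two-node run with torch.distributed.launch also needs --nnodes, --node_rank, and --master_addr on each node. A sketch, with <node0-addr> as a placeholder for node 0's address, run once per node:

# Hypothetical two-node launch: the same command runs on each node,
# differing only in --node_rank.
# On node 0:
OMP_NUM_THREADS=1 python -m torch.distributed.launch \
    --nproc_per_node=8 --nnodes=2 --node_rank=0 \
    --master_addr=<node0-addr> --master_port=29500 \
    main_pretrain.py --batch_size 64 --accum_iter 4 ...  # remaining flags as above
# On node 1: identical except --node_rank=1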