OFA-Sys / OFA

Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
Apache License 2.0

how to enable multi-node training? #363

Closed: JulioZhao97 closed this issue 1 year ago

JulioZhao97 commented 1 year ago

I want to train OFA on a Slurm cluster (2 nodes, each with 8 GPUs).

Following the official OFA instructions, I added --distributed_port=12345, and my run script is as follows:

#!/usr/bin/env bash

# The port for communication. Note that if you want to run multiple tasks on the same machine,
# you need to specify different port numbers.
export PYTHONPATH=$PYTHONPATH:/nvme/zhaozhiyuan/hthl/OFA/fairseq/
export MASTER_PORT=1052
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export GPUS_PER_NODE=8

bpe_dir=../../utils/BPE
user_dir=../../ofa_module

restore_file=../../checkpoints/ofa_base.pt

data_dir=../../dataset/pretrain_data
neg_sample_dir=${data_dir}/negative_sample
data=${data_dir}/vision_language_examples.tsv
text_data=${data_dir}/text_examples.tsv
image_data=${data_dir}/image_examples.tsv
detection_data=${data_dir}/detection_examples.tsv
#detection_data=${data_dir}/objs365_det.tsv

selected_cols=0,1,2,3,4,5,6,7
text_selected_cols=0,1
image_selected_cols=0,1,2
detection_selected_cols=0,1,2

task=unify_task
arch=ofa_base
criterion=adjust_label_smoothed_cross_entropy
label_smoothing=0.0
lr=1e-4
max_epoch=50
warmup_ratio=0.01
batch_size=4
update_freq=1
resnet_drop_path_rate=0.0
encoder_drop_path_rate=0.1
decoder_drop_path_rate=0.1
dropout=0.1
attention_dropout=0.0
max_src_length=80
max_tgt_length=30
num_bins=1000
patch_image_size=384
sample_patch_num=196
max_image_size=512

save_path=./checkpoints

python3 -m torch.distributed.launch --nproc_per_node=${GPUS_PER_NODE} --master_port=${MASTER_PORT} ../../train.py \
  $data \
  --text-data=${text_data} \
  --image-data=${image_data} \
  --detection-data=${detection_data} \
  --selected-cols=${selected_cols} \
  --text-selected-cols=${text_selected_cols} \
  --image-selected-cols=${image_selected_cols} \
  --detection-selected-cols=${detection_selected_cols} \
  --bpe-dir=${bpe_dir} \
  --user-dir=${user_dir} \
  --restore-file=${restore_file} \
  --reset-optimizer --reset-dataloader --reset-meters \
  --save-dir=${save_path} \
  --neg-sample-dir=${neg_sample_dir} \
  --task=${task} \
  --arch=${arch} \
  --criterion=${criterion} \
  --label-smoothing=${label_smoothing} \
  --batch-size=${batch_size} \
  --update-freq=${update_freq} \
  --encoder-normalize-before \
  --decoder-normalize-before \
  --share-decoder-input-output-embed \
  --share-all-embeddings \
  --layernorm-embedding \
  --patch-layernorm-embedding \
  --code-layernorm-embedding \
  --resnet-drop-path-rate=${resnet_drop_path_rate} \
  --encoder-drop-path-rate=${encoder_drop_path_rate} \
  --decoder-drop-path-rate=${decoder_drop_path_rate} \
  --dropout=${dropout} \
  --attention-dropout=${attention_dropout} \
  --weight-decay=0.01 --optimizer=adam --adam-betas="(0.9,0.999)" --adam-eps=1e-08 --clip-norm=5.0 \
  --lr-scheduler=polynomial_decay --lr=${lr} \
  --max-epoch=${max_epoch} --warmup-ratio=${warmup_ratio} \
  --log-format=simple --log-interval=10 \
  --fixed-validation-seed=7 \
  --keep-last-epochs=15 \
  --save-interval=1 \
  --save-interval-updates=6000 \
  --disable-validation \
  --max-src-length=${max_src_length} \
  --max-tgt-length=${max_tgt_length} \
  --add-type-embedding \
  --scale-attn \
  --scale-fc \
  --scale-heads \
  --disable-entangle \
  --num-bins=${num_bins} \
  --patch-image-size=${patch_image_size} \
  --sample-patch-num=${sample_patch_num} \
  --max-image-size=${max_image_size} \
  --fp16 \
  --fp16-scale-window=128 \
  --num-workers=0 \
  --distributed_port=12345 \
  --ddp-backend=no_c10d

Then I run

srun -p xxx --job-name=xxx --gres=gpu:8  --nodes 2 --ntasks-per-node 8 --quotatype=auto bash pretrain_ofa_base.sh

which gives me 2 nodes with 8 GPUs each.
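
Note that srun starts the whole command once per task, so with --nodes 2 and --ntasks-per-node 8 the launch script above is started 16 times in total (8 per node). A quick sanity check of this (the partition name is a placeholder) is shown below:

# Hypothetical check, not part of the training setup: prints one line per Slurm task,
# i.e. 16 lines for 2 nodes x 8 tasks, which means pretrain_ofa_base.sh also runs 16 times.
srun -p xxx --nodes 2 --ntasks-per-node 8 --gres=gpu:8 \
     bash -c 'echo "task ${SLURM_PROCID}/${SLURM_NTASKS} on $(hostname)"'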

However, the process keeps throwing the following error:

Traceback (most recent call last):
  File "/mnt/lustre/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/runpy.py", line 192, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/mnt/lustre/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/mnt/petrelfs/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/mnt/petrelfs/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/mnt/petrelfs/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/mnt/petrelfs/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/site-packages/torch/distributed/run.py", line 710, in run
    elastic_launch(
  File "/mnt/petrelfs/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/mnt/petrelfs/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 252, in launch_agent
    result = agent.run()
  File "/mnt/petrelfs/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
    result = f(*args, **kwargs)
  File "/mnt/petrelfs/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 709, in run
    result = self._invoke_run(role)
  File "/mnt/petrelfs/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 837, in _invoke_run
    self._initialize_workers(self._worker_group)
  File "/mnt/petrelfs/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
    result = f(*args, **kwargs)
  File "/mnt/petrelfs/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 678, in _initialize_workers
    self._rendezvous(worker_group)
  File "/mnt/petrelfs/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
    result = f(*args, **kwargs)
  File "/mnt/petrelfs/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 538, in _rendezvous
    store, group_rank, group_world_size = spec.rdzv_handler.next_rendezvous()
  File "/mnt/petrelfs/zhaozhiyuan/anaconda3/envs/ofa/lib/python3.8/site-packages/torch/distributed/elastic/rendezvous/static_tcp_rendezvous.py", line 55, in next_rendezvous
    self._store = TCPStore(  # type: ignore[call-arg]
RuntimeError: Address already in use

When I check the nodes, I notice that only one of the two nodes is running and the other has no process running on it.

Can you kindly tell me how to fix this? Thanks!

JulioZhao97 commented 1 year ago

fixed
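
For anyone who lands on this issue later: the thread does not record what the actual fix was, so the following is only a sketch of a common approach, not the author's confirmed solution. With srun --ntasks-per-node 8, the launch script (and therefore torch.distributed.launch with --nproc_per_node=8) is started eight times per node, and every copy tries to bind the same MASTER_PORT, which matches the "Address already in use" error. Since OFA builds on fairseq, one typical pattern on Slurm is to skip torch.distributed.launch entirely, run train.py once per node, and pass --distributed-port (note the dash form of the fairseq CLI flag); fairseq then reads the Slurm environment (node list, SLURM_NODEID, etc.), infers the master host, and spawns one worker per visible GPU on each node itself. A minimal sketch, with the partition name, paths, and wrapper file name as placeholders and the original training flags abbreviated:

#!/usr/bin/env bash
# run_ofa_slurm.sh -- hypothetical wrapper; started once per node by srun.
# No torch.distributed.launch here: with --distributed-port set and no explicit init
# method, fairseq derives the master address from the Slurm node list and launches
# the per-GPU workers on each node on its own.
export PYTHONPATH=$PYTHONPATH:/path/to/OFA/fairseq/
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7

data=../../dataset/pretrain_data/vision_language_examples.tsv
# TRAIN_FLAGS: all task/model/optimization flags from the script in the question,
# unchanged, minus the torch.distributed.launch invocation and MASTER_PORT.
TRAIN_FLAGS="--task=unify_task --arch=ofa_base --batch-size=4"   # ...and the rest

python3 ../../train.py ${data} ${TRAIN_FLAGS} \
  --distributed-port=12345 \
  --ddp-backend=no_c10d

# Submit with ONE task per node; fairseq spawns the 8 per-GPU workers itself:
#   srun -p xxx --job-name=ofa_pretrain --nodes 2 --ntasks-per-node 1 --gres=gpu:8 \
#        bash run_ofa_slurm.sh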