microsoft / Megatron-DeepSpeed

Ongoing research training transformer language models at scale, including: BERT & GPT-2

"pretrain_gpt.py: error: unrecognized arguments: --local_rank=1" #178

Closed: YJHMITWEB closed this issue 1 year ago

YJHMITWEB commented 1 year ago

Hi, I am using the latest version of the repo. I have installed torch 1.13, CUDA 11.6, and DeepSpeed 0.10.0, and my system has two A100 GPUs. When I run ./ds_pretrain_gpt_125M_MoE64.sh under Megatron-DeepSpeed/examples_deepspeed/MoE, the following error occurs:

deepspeed Megatron-DeepSpeed/examples_deepspeed/MoE/../../pretrain_gpt.py --override-opt_param-scheduler --adam-beta1 0.9 --adam-beta2 0.95 --tensor-model-parallel-size 1 --moe-expert-parallel-size -1 --num-experts 64 --moe-loss-coeff 0.01 --moe-train-capacity-factor 1.0 --moe-eval-capacity-factor 1.0 --moe-min-capacity 4 --init-method-std 0.014 --lr-decay-tokens 300000000000 --lr-warmup-tokens 375000000 --micro-batch-size 4 --exit-duration-in-mins 30000000 --global-batch-size 256 --num-layers 12 --hidden-size 768 --num-attention-heads 12 --seq-length 2048 --max-position-embeddings 2048 --train-tokens 300000000000 --train-iters 1716613 --lr 4.5e-4 --min-lr 4.5e-06 --lr-decay-style cosine --split 98,2,0 --log-interval 10 --eval-interval 100 --eval-iters 10 --save-interval 10000 --weight-decay 0.1 --clip-grad 1.0 --hysteresis 2 --num-workers 0 --fp16 --load Megatron-DeepSpeed/examples_deepspeed/MoE/output/checkpoint/gpt-0.125B-lr-4.5e-4-minlr-4.5e-06-bs-256-gpus--1-mp-1-pp-1-ep-64-mlc-0.01-cap-1.0-drop-true --save Megatron-DeepSpeed/examples_deepspeed/MoE/output/checkpoint/gpt-0.125B-lr-4.5e-4-minlr-4.5e-06-bs-256-gpus--1-mp-1-pp-1-ep-64-mlc-0.01-cap-1.0-drop-true --tensorboard-queue-size 1 --log-timers-to-tensorboard --log-batch-size-to-tensorboard --log-validation-ppl-to-tensorboard --tensorboard-dir Megatron-DeepSpeed/examples_deepspeed/MoE/output/tensorboard/gpt-0.125B-lr-4.5e-4-minlr-4.5e-06-bs-256-gpus--1-mp-1-pp-1-ep-64-mlc-0.01-cap-1.0-drop-true_a100-13.cluster_2023.07.22-09.58.29 --checkpoint-activations --create-moe-param-group --vocab-file /data/the_pile_public_merged_nopreprocessing/gpt2-vocab.json --merge-file /data/the_pile_public_merged_nopreprocessing/gpt2-merges.txt --data-path /vc_data_blob/users/conglli/the_pile_public_merged_nopreprocessing/pile_text_document --data-impl mmap --deepspeed --deepspeed_config ds_config_gpt_gpt-0.125B-lr-4.5e-4-minlr-4.5e-06-bs-256-gpus--1-mp-1-pp-1-ep-64-mlc-0.01-cap-1.0-drop-true.json --pipeline-model-parallel-size 1 --no-pipeline-parallel --deepspeed-activation-checkpointing
[2023-07-22 09:58:30,511] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-07-22 09:58:31,530] [WARNING] [runner.py:196:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2023-07-22 09:58:31,546] [INFO] [runner.py:555:main] cmd = /miniconda3/envs/deepspeed/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None Megatron-DeepSpeed/examples_deepspeed/MoE/../../pretrain_gpt.py --override-opt_param-scheduler --adam-beta1 0.9 --adam-beta2 0.95 --tensor-model-parallel-size 1 --moe-expert-parallel-size -1 --num-experts 64 --moe-loss-coeff 0.01 --moe-train-capacity-factor 1.0 --moe-eval-capacity-factor 1.0 --moe-min-capacity 4 --init-method-std 0.014 --lr-decay-tokens 300000000000 --lr-warmup-tokens 375000000 --micro-batch-size 4 --exit-duration-in-mins 30000000 --global-batch-size 256 --num-layers 12 --hidden-size 768 --num-attention-heads 12 --seq-length 2048 --max-position-embeddings 2048 --train-tokens 300000000000 --train-iters 1716613 --lr 4.5e-4 --min-lr 4.5e-06 --lr-decay-style cosine --split 98,2,0 --log-interval 10 --eval-interval 100 --eval-iters 10 --save-interval 10000 --weight-decay 0.1 --clip-grad 1.0 --hysteresis 2 --num-workers 0 --fp16 --load Megatron-DeepSpeed/examples_deepspeed/MoE/output/checkpoint/gpt-0.125B-lr-4.5e-4-minlr-4.5e-06-bs-256-gpus--1-mp-1-pp-1-ep-64-mlc-0.01-cap-1.0-drop-true --save Megatron-DeepSpeed/examples_deepspeed/MoE/output/checkpoint/gpt-0.125B-lr-4.5e-4-minlr-4.5e-06-bs-256-gpus--1-mp-1-pp-1-ep-64-mlc-0.01-cap-1.0-drop-true --tensorboard-queue-size 1 --log-timers-to-tensorboard --log-batch-size-to-tensorboard --log-validation-ppl-to-tensorboard --tensorboard-dir Megatron-DeepSpeed/examples_deepspeed/MoE/output/tensorboard/gpt-0.125B-lr-4.5e-4-minlr-4.5e-06-bs-256-gpus--1-mp-1-pp-1-ep-64-mlc-0.01-cap-1.0-drop-true_a100-13.cluster_2023.07.22-09.58.29 --checkpoint-activations --create-moe-param-group --vocab-file /data/the_pile_public_merged_nopreprocessing/gpt2-vocab.json --merge-file /data/the_pile_public_merged_nopreprocessing/gpt2-merges.txt --data-path /vc_data_blob/users/conglli/the_pile_public_merged_nopreprocessing/pile_text_document --data-impl mmap --deepspeed --deepspeed_config ds_config_gpt_gpt-0.125B-lr-4.5e-4-minlr-4.5e-06-bs-256-gpus--1-mp-1-pp-1-ep-64-mlc-0.01-cap-1.0-drop-true.json --pipeline-model-parallel-size 1 --no-pipeline-parallel --deepspeed-activation-checkpointing
[2023-07-22 09:58:32,772] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-07-22 09:58:33,717] [INFO] [launch.py:145:main] WORLD INFO DICT: {'localhost': [0, 1]}
[2023-07-22 09:58:33,717] [INFO] [launch.py:151:main] nnodes=1, num_local_procs=2, node_rank=0
[2023-07-22 09:58:33,717] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})
[2023-07-22 09:58:33,717] [INFO] [launch.py:163:main] dist_world_size=2
[2023-07-22 09:58:33,717] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0,1
[2023-07-22 09:58:34,883] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-07-22 09:58:35,001] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
--------------------------------------------------
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
async_io ............... [YES] ...... [OKAY]
cpu_adagrad ............ [YES] ...... [OKAY]
cpu_adam ............... [YES] ...... [OKAY]
fused_adam ............. [YES] ...... [OKAY]
fused_lamb ............. [YES] ...... [OKAY]
quantizer .............. [YES] ...... [OKAY]
random_ltd ............. [YES] ...... [OKAY]
 [WARNING]  please install triton==1.0.0 if you want to use sparse attention
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [YES] ...... [OKAY]
transformer ............ [YES] ...... [OKAY]
stochastic_transformer . [YES] ...... [OKAY]
transformer_inference .. [YES] ...... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/miniconda3/envs/deepspeed/lib/python3.10/site-packages/torch']
torch version .................... 1.13.1+cu116
deepspeed install path ........... ['/miniconda3/envs/deepspeed/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.10.0, unknown, unknown
torch cuda version ............... 11.6
torch hip version ................ None
nvcc version ..................... 11.6
deepspeed wheel compiled w. ...... torch 1.13, cuda 11.6
**** Git info for Megatron: git_hash=cddf673 git_branch=main ****
usage: pretrain_gpt.py [-h] [--num-layers NUM_LAYERS] [--encoder-num-layers ENCODER_NUM_LAYERS] [--decoder-num-layers DECODER_NUM_LAYERS] [--num-experts NUM_EXPERTS [NUM_EXPERTS ...]] [--mlp-type MLP_TYPE]
                       [--topk TOPK] [--expert-interval EXPERT_INTERVAL] [--hidden-size HIDDEN_SIZE] [--ffn-hidden-size FFN_HIDDEN_SIZE] [--num-attention-heads NUM_ATTENTION_HEADS] [--kv-channels KV_CHANNELS]
                       [--max-position-embeddings MAX_POSITION_EMBEDDINGS] [--use-rotary-position-embeddings] [--rotary-percent ROTARY_PERCENT] [--no-position-embedding]
                       [--make-vocab-size-divisible-by MAKE_VOCAB_SIZE_DIVISIBLE_BY] [--normalization {layernorm,rmsnorm}] [--layernorm-epsilon LAYERNORM_EPSILON] [--apply-layernorm-1p]
                       [--apply-residual-connection-post-layernorm] [--openai-gelu] [--squared-relu] [--swiglu] [--onnx-safe ONNX_SAFE] [--bert-no-binary-head] [--num-experts-switch NUM_EXPERTS_SWITCH]
                       [--untie-embeddings-and-output-weights] [--embedding-weights-in-fp32] [--attention-dropout ATTENTION_DROPOUT] [--hidden-dropout HIDDEN_DROPOUT] [--weight-decay WEIGHT_DECAY]
                       [--start-weight-decay START_WEIGHT_DECAY] [--end-weight-decay END_WEIGHT_DECAY] [--weight-decay-incr-style {constant,linear,cosine}] [--clip-grad CLIP_GRAD] [--adam-beta1 ADAM_BETA1]
                       [--adam-beta2 ADAM_BETA2] [--adam-eps ADAM_EPS] [--sgd-momentum SGD_MOMENTUM] [--micro-batch-size MICRO_BATCH_SIZE] [--batch-size BATCH_SIZE] [--global-batch-size GLOBAL_BATCH_SIZE]
                       [--rampup-batch-size [RAMPUP_BATCH_SIZE ...]] [--recompute-activations] [--recompute-granularity {full,selective}] [--distribute-saved-activations] [--recompute-method {uniform,block}]
                       [--recompute-num-layers RECOMPUTE_NUM_LAYERS] [--checkpoint-activations] [--distribute-checkpointed-activations] [--checkpoint-num-layers CHECKPOINT_NUM_LAYERS] [--train-iters TRAIN_ITERS]
                       [--train-samples TRAIN_SAMPLES] [--train-tokens TRAIN_TOKENS] [--random-ltd] [--log-interval LOG_INTERVAL] [--exit-interval EXIT_INTERVAL] [--exit-duration-in-mins EXIT_DURATION_IN_MINS]
                       [--exit-signal-handler] [--tensorboard-dir TENSORBOARD_DIR] [--no-masked-softmax-fusion] [--no-bias-gelu-fusion] [--no-bias-dropout-fusion] [--disable-moe-token-dropping]
                       [--moe-train-capacity-factor MOE_TRAIN_CAPACITY_FACTOR] [--moe-eval-capacity-factor MOE_EVAL_CAPACITY_FACTOR] [--moe-min-capacity MOE_MIN_CAPACITY] [--moe-loss-coeff MOE_LOSS_COEFF]
                       [--create-moe-param-group] [--use-flash-attn] [--disable-bias-linear] [--optimizer {adam,sgd}] [--dataloader-type {single,cyclic}] [--ds-inference] [--cpu-optimizer] [--cpu_torch_adam]
                       [--no-pipeline-parallel] [--use-tutel] [--inference] [--no-async-tensor-model-parallel-allreduce] [--no-persist-layer-norm] [--sequence-parallel] [--no-gradient-accumulation-fusion]
                       [--seed SEED] [--data-parallel-random-init] [--init-method-std INIT_METHOD_STD] [--init-method-xavier-uniform] [--lr LR] [--lr-decay-style {constant,linear,cosine,inverse-square-root}]
                       [--lr-decay-iters LR_DECAY_ITERS] [--lr-decay-samples LR_DECAY_SAMPLES] [--lr-decay-tokens LR_DECAY_TOKENS] [--lr-warmup-fraction LR_WARMUP_FRACTION] [--lr-warmup-iters LR_WARMUP_ITERS]
                       [--lr-warmup-samples LR_WARMUP_SAMPLES] [--lr-warmup-tokens LR_WARMUP_TOKENS] [--warmup WARMUP] [--min-lr MIN_LR] [--override-opt_param-scheduler] [--use-checkpoint-opt_param-scheduler]
                       [--save SAVE] [--save-interval SAVE_INTERVAL] [--no-save-optim] [--no-save-rng] [--load LOAD] [--no-load-optim] [--no-load-rng] [--no-load-lr-state] [--finetune] [--no-initialization]
                       [--use-checkpoint-args] [--exit-on-missing-checkpoint] [--fp16] [--bf16] [--loss-scale LOSS_SCALE] [--initial-loss-scale INITIAL_LOSS_SCALE] [--min-loss-scale MIN_LOSS_SCALE]
                       [--loss-scale-window LOSS_SCALE_WINDOW] [--hysteresis HYSTERESIS] [--fp32-residual-connection] [--no-query-key-layer-scaling] [--attention-softmax-in-fp32]
                       [--accumulate-allreduce-grads-in-fp32] [--fp16-lm-cross-entropy] [--tensor-model-parallel-size TENSOR_MODEL_PARALLEL_SIZE] [--enable-expert-tensor-parallelism]
                       [--pipeline-model-parallel-size PIPELINE_MODEL_PARALLEL_SIZE] [--pipeline-model-parallel-split-rank PIPELINE_MODEL_PARALLEL_SPLIT_RANK] [--moe-expert-parallel-size MOE_EXPERT_PARALLEL_SIZE]
                       [--model-parallel-size MODEL_PARALLEL_SIZE] [--num-layers-per-virtual-pipeline-stage NUM_LAYERS_PER_VIRTUAL_PIPELINE_STAGE] [--overlap-p2p-communication]
                       [--distributed-backend {nccl,gloo,ccl}] [--distributed-timeout-minutes DISTRIBUTED_TIMEOUT_MINUTES] [--DDP-impl {local,torch,FSDP}] [--no-contiguous-buffers-in-local-ddp]
                       [--no-scatter-gather-tensors-in-pipeline] [--use-ring-exchange-p2p] [--local-rank LOCAL_RANK] [--lazy-mpu-init LAZY_MPU_INIT] [--use-cpu-initialization] [--empty-unused-memory-level {0,1,2}]
                       [--standalone-embedding-stage] [--use-distributed-optimizer] [--eval-iters EVAL_ITERS] [--eval-interval EVAL_INTERVAL] [--skip-train] [--aml-data-download-path AML_DATA_DOWNLOAD_PATH]
                       [--data-path [DATA_PATH ...]] [--split SPLIT] [--train-data-path [TRAIN_DATA_PATH ...]] [--valid-data-path [VALID_DATA_PATH ...]] [--test-data-path [TEST_DATA_PATH ...]]
                       [--data-cache-path DATA_CACHE_PATH] [--vocab-size VOCAB_SIZE] [--vocab-file VOCAB_FILE] [--merge-file MERGE_FILE] [--vocab-extra-ids VOCAB_EXTRA_IDS] [--seq-length SEQ_LENGTH]
                       [--encoder-seq-length ENCODER_SEQ_LENGTH] [--decoder-seq-length DECODER_SEQ_LENGTH] [--retriever-seq-length RETRIEVER_SEQ_LENGTH] [--sample-rate SAMPLE_RATE] [--mask-prob MASK_PROB]
                       [--short-seq-prob SHORT_SEQ_PROB] [--mmap-warmup] [--num-workers NUM_WORKERS]
                       [--tokenizer-type {BertWordPieceLowerCase,BertWordPieceCase,GPT2BPETokenizer,SentencePieceTokenizer,GPTSentencePieceTokenizer,NullTokenizer}] [--tokenizer-model TOKENIZER_MODEL]
                       [--data-impl {mmap,infer}] [--reset-position-ids] [--reset-attention-mask] [--eod-mask-loss] [--train-data-exact-num-epochs TRAIN_DATA_EXACT_NUM_EPOCHS] [--return-data-index]
                       [--data-efficiency-curriculum-learning] [--train-idx-path TRAIN_IDX_PATH] [--train-desc-path TRAIN_DESC_PATH] [--train-doc-idx-path TRAIN_DOC_IDX_PATH]
                       [--train-sample-idx-path TRAIN_SAMPLE_IDX_PATH] [--train-shuffle-idx-path TRAIN_SHUFFLE_IDX_PATH] [--adlr-autoresume] [--adlr-autoresume-interval ADLR_AUTORESUME_INTERVAL]
                       [--ict-head-size ICT_HEAD_SIZE] [--biencoder-projection-dim BIENCODER_PROJECTION_DIM] [--biencoder-shared-query-context-model] [--ict-load ICT_LOAD] [--bert-load BERT_LOAD]
                       [--titles-data-path TITLES_DATA_PATH] [--query-in-block-prob QUERY_IN_BLOCK_PROB] [--use-one-sent-docs] [--evidence-data-path EVIDENCE_DATA_PATH]
                       [--retriever-report-topk-accuracies RETRIEVER_REPORT_TOPK_ACCURACIES [RETRIEVER_REPORT_TOPK_ACCURACIES ...]] [--retriever-score-scaling] [--block-data-path BLOCK_DATA_PATH]
                       [--embedding-path EMBEDDING_PATH] [--indexer-batch-size INDEXER_BATCH_SIZE] [--indexer-log-interval INDEXER_LOG_INTERVAL] [--num-classes NUM_CLASSES] [--img-h IMG_H] [--img-w IMG_W]
                       [--num-channels NUM_CHANNELS] [--patch-dim PATCH_DIM] [--classes-fraction CLASSES_FRACTION] [--data-per-class-fraction DATA_PER_CLASS_FRACTION] [--no-data-sharding]
                       [--head-lr-mult HEAD_LR_MULT] [--vision-pretraining] [--vision-pretraining-type {classify,inpaint,dino}] [--vision-backbone-type {vit,mit,swin}] [--swin-backbone-type {tiny,base,h3}]
                       [--mask-type {random,row}] [--mask-factor MASK_FACTOR] [--iter-per-epoch ITER_PER_EPOCH] [--dino-local-img-size DINO_LOCAL_IMG_SIZE] [--dino-local-crops-number DINO_LOCAL_CROPS_NUMBER]
                       [--dino-head-hidden-size DINO_HEAD_HIDDEN_SIZE] [--dino-bottleneck-size DINO_BOTTLENECK_SIZE] [--dino-freeze-last-layer DINO_FREEZE_LAST_LAYER] [--dino-norm-last-layer]
                       [--dino-warmup-teacher-temp DINO_WARMUP_TEACHER_TEMP] [--dino-teacher-temp DINO_TEACHER_TEMP] [--dino-warmup-teacher-temp-epochs DINO_WARMUP_TEACHER_TEMP_EPOCHS] [--log-params-norm]
                       [--log-num-zeros-in-grad] [--timing-log-level {0,1,2}] [--no-barrier-with-level-1-timing] [--timing-log-option {max,minmax,all}] [--tensorboard-log-interval TENSORBOARD_LOG_INTERVAL]
                       [--tensorboard-queue-size TENSORBOARD_QUEUE_SIZE] [--log-timers-to-tensorboard] [--log-batch-size-to-tensorboard] [--no-log-learnig-rate-to-tensorboard] [--no-log-loss-scale-to-tensorboard]
                       [--log-validation-ppl-to-tensorboard] [--log-optimizer-states-to-tensorboard] [--log-memory-to-tensorboard] [--log-world-size-to-tensorboard] [--zero-stage ZERO_STAGE]
                       [--zero-reduce-scatter] [--zero-contigious-gradients] [--zero-reduce-bucket-size ZERO_REDUCE_BUCKET_SIZE] [--zero-allgather-bucket-size ZERO_ALLGATHER_BUCKET_SIZE]
                       [--remote-device {none,cpu,nvme}] [--use-pin-memory] [--scattered-embeddings] [--split-transformers] [--memory-centric-tiled-linear] [--tile-factor TILE_FACTOR]
                       [--deepspeed-activation-checkpointing] [--partition-activations] [--contigious-checkpointing] [--checkpoint-in-cpu] [--synchronize-each-layer] [--profile-backward]
                       [--num-layers-teacher NUM_LAYERS_TEACHER] [--num-experts-teacher NUM_EXPERTS_TEACHER [NUM_EXPERTS_TEACHER ...]] [--hidden-size-teacher HIDDEN_SIZE_TEACHER]
                       [--num-attention-heads-teacher NUM_ATTENTION_HEADS_TEACHER] [--mos] [--kd] [--kd-alpha-ce KD_ALPHA_CE] [--kd-beta-ce KD_BETA_CE] [--kd-temp KD_TEMP] [--reset-iteration]
                       [--load-teacher LOAD_TEACHER] [--inference-batch-times-seqlen-threshold INFERENCE_BATCH_TIMES_SEQLEN_THRESHOLD] [--max-tokens-to-oom MAX_TOKENS_TO_OOM] [--output-bert-embeddings]
                       [--bert-embedder-type {megatron,huggingface}] [--fp8-e4m3] [--fp8-hybrid] [--no-fp8-wgrad] [--fp8-margin FP8_MARGIN] [--fp8-interval FP8_INTERVAL]
                       [--transformer-impl {local,transformer_engine}] [--fp8-amax-history-len FP8_AMAX_HISTORY_LEN] [--fp8-amax-compute-algo {most_recent,max}] [--retro-workdir RETRO_WORKDIR]
                       [--retro-add-retriever] [--retro-cyclic-train-iters RETRO_CYCLIC_TRAIN_ITERS] [--retro-encoder-layers RETRO_ENCODER_LAYERS] [--retro-encoder-hidden-dropout RETRO_ENCODER_HIDDEN_DROPOUT]
                       [--retro-encoder-attention-dropout RETRO_ENCODER_ATTENTION_DROPOUT] [--retro-num-neighbors RETRO_NUM_NEIGHBORS] [--retro-num-retrieved-chunks RETRO_NUM_RETRIEVED_CHUNKS]
                       [--retro-return-doc-ids] [--deepspeed] [--deepspeed_config DEEPSPEED_CONFIG] [--deepscale] [--deepscale_config DEEPSCALE_CONFIG] [--deepspeed_mpi]
pretrain_gpt.py: error: unrecognized arguments: --local_rank=0
pretrain_gpt.py: error: unrecognized arguments: --local_rank=1
[2023-07-22 09:58:37,722] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 285149
[2023-07-22 09:58:37,746] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 285150

I also tried modifying the .sh script by adding "CUDA_VISIBLE_DEVICES=0,1" to the run_cmd, but the issue persists.

clumsy commented 1 year ago

The message says it's [--local-rank LOCAL_RANK] now. I believe there was a PR recently that changed it from --local_rank.
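For reference, here is a minimal argparse sketch (a hypothetical parser, not the repo's actual code) reproducing the mismatch: the parser registers only the dashed spelling, while the DeepSpeed launcher still injects the underscored form, which argparse rejects as unrecognized.

```python
# Reproduction sketch (hypothetical, not the repo's code): the parser
# registers only the dashed flag, so the underscored variant injected
# by the DeepSpeed launcher fails to parse.
import argparse

parser = argparse.ArgumentParser(prog="pretrain_gpt.py")
parser.add_argument("--local-rank", type=int, default=None,
                    help="local rank passed by the distributed launcher")

args = parser.parse_args(["--local-rank=1"])  # OK: args.local_rank == 1
print(args.local_rank)

# The underscored spelling is a different option string, so this exits with:
#   pretrain_gpt.py: error: unrecognized arguments: --local_rank=1
# parser.parse_args(["--local_rank=1"])
```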

YJHMITWEB commented 1 year ago

> The message says it's [--local-rank LOCAL_RANK] now. I believe there was a PR recently that changed it from --local_rank.

Thanks, I modified it and am now able to run it successfully.
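For anyone hitting the same error: one possible workaround (a sketch assuming you can edit the argument definition; not necessarily the exact change made here) is to register both spellings as aliases, so either launcher form parses.

```python
# Sketch: accept both spellings by registering the underscore form as an
# alias of the dashed one. argparse derives dest="local_rank" from the
# first long option string.
import argparse

parser = argparse.ArgumentParser(prog="pretrain_gpt.py")
parser.add_argument("--local-rank", "--local_rank", type=int, default=None,
                    help="local rank passed by the distributed launcher")

for argv in (["--local-rank=1"], ["--local_rank=1"]):
    print(parser.parse_args(argv).local_rank)  # both print 1
```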