ai-forever / ru-gpts

Russian GPT3 models.
Apache License 2.0

Can't process data when finetuning GPT3-XL #41

Closed alik1993 closed 3 years ago

alik1993 commented 3 years ago

I tried to finetune GPT3-XL via the deepspeed_gpt3_xl.sh script. I downloaded and prepared the data as in Finetune_and_generate_RuGPTs_deepspeed_megatron.ipynb and also added the argument --tokenizer-path sberbank-ai/rugpt3xl to deepspeed_gpt3_xl.sh.

But running the script throws an error:

USE_DEEPSPEED=1 mpirun --np 1 python pretrain_gpt3.py \
  --train-data-path train.list \
  --test-data-path valid.list \
  --logging-dir=logs/ \
  --save model \
  --save-interval 1000 \
  --model-parallel-size 1 \
  --num-layers 24 \
  --hidden-size 2048 \
  --num-attention-heads 16 \
  --batch-size 1 \
  --seq-length 2048 \
  --max-position-embeddings 2048 \
  --train-iters 5 \
  --resume-dataloader \
  --distributed-backend nccl \
  --lr 0.0002 \
  --lr-decay-style cosine \
  --weight-decay 1e-2 \
  --warmup .01 \
  --log-interval 100 \
  --fp16 \
  --checkpoint-activations \
  --deepspeed-activation-checkpointing \
  --sparse-mode alternating \
  --deepspeed \
  --deepspeed_config src/deepspeed_config/gpt3_xl_sparse_2048.json \
  --tokenizer-path sberbank-ai/rugpt3xl

2021-02-20 11:36:26.334556: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
using world size: 1 and model-parallel size: 1

using dynamic loss scaling
initializing model parallel with size 1
[2021-02-20 11:36:33,987] [INFO] [checkpointing.py:629:_configure_using_config_file] {'partition_activations': False, 'contiguous_memory_optimization': False, 'cpu_checkpointing': False, 'number_checkpoints': None, 'synchronize_checkpoint_boundary': False, 'profile': False}
Pretrain GPT3 model arguments:
attention_dropout ............ 0.1
num_attention_heads .......... 16
hidden_size .................. 2048
intermediate_size ............ None
num_layers ................... 24
layernorm_epsilon ............ 1e-05
hidden_dropout ............... 0.1
max_position_embeddings ...... 2048
vocab_size ................... 30522
deep_init .................... False
make_vocab_size_divisible_by . 8
cpu_optimizer ................ False
cpu_torch_adam ............... False
sparse_mode .................. alternating
fp16 ......................... True
fp32_embedding ............... False
fp32_layernorm ............... False
fp32_tokentypes .............. False
fp32_allreduce ............... False
hysteresis ................... 2
loss_scale ................... None
loss_scale_window ............ 1000
min_scale .................... 1
batch_size ................... 1
weight_decay ................. 0.01
checkpoint_activations ....... True
checkpoint_num_layers ........ 1
deepspeed_activation_checkpointing True
clip_grad .................... 1.0
train_iters .................. 5
log_interval ................. 100
logging_dir .................. logs/
exit_interval ................ None
seed ......................... 1234
reset_position_ids ........... False
reset_attention_mask ......... False
lr_decay_iters ............... None
lr_decay_style ............... cosine
lr ........................... 0.0002
min_lr ....................... 1e-06
warmup ....................... 0.01
save ......................... model
save_interval ................ 1000
no_save_optim ................ False
no_save_rng .................. False
load ......................... None
no_load_optim ................ False
log_memory ................... False
no_load_rng .................. False
load_huggingface ............. None
export_huggingface ........... None
huggingface_double_pos_embeddings False
load_tag .....................
cacheprefix .................
finetune ..................... False
resume_dataloader ............ True
distributed_backend .......... nccl
local_rank ................... 0
eval_batch_size .............. None
eval_iters ................... 100
eval_interval ................ 1000
eval_seq_length .............. None
eval_max_preds_per_seq ....... None
overlapping_eval ............. 32
cloze_eval ................... False
eval_hf ...................... False
load_openai .................. False
temperature .................. 1.0
top_p ........................ 0.0
top_k ........................ 0
out_seq_length ............... 256
tg_token_name ................ token.txt
model_parallel_size .......... 1
shuffle ...................... False
train_data ................... None
use_npy_data_loader .......... False
train_data_path .............. train.list
val_data_path ..............
test_data_path ............... valid.list
input_data_sizes_file ........ sizes.txt
delim ........................ ,
text_key ..................... sentence
eval_text_key ................ None
valid_data ................... None
split ........................ 1000,1,1
test_data .................... None
overwrite_cache .............. False
lazy_loader .................. False
loose_json ................... False
presplit_sentences ........... False
num_workers .................. 2
tokenizer_path ............... sberbank-ai/rugpt3xl
cache_dir .................... None
use_tfrecords ................ False
seq_length ................... 2048
max_files_per_process ........ 50000
max_preds_per_seq ............ None
deepspeed .................... True
deepspeed_config ............. src/deepspeed_config/gpt3_xl_sparse_2048.json
deepscale .................... False
deepscale_config ............. None
deepspeed_mpi ................ False
cuda ......................... True
rank ......................... 0
world_size ................... 1
dynamic_loss_scale ........... True
[2021-02-20 11:36:33,987] [INFO] [checkpointing.py:256:model_parallel_cuda_manual_seed] > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234
Load tokenizer from sberbank-ai/rugpt3xl
Load RuGPT3 Dataset from train.list, 50000 files per process
/home/atuthvatullin/environments/albert/lib/python3.6/site-packages/tensorflow/python/autograph/utils/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
R0/1: Loading dataset train.list
R0/1: Check filelist train.list with root dir
R0/1: Shard [0, 1]
R0/1: Loaded 0/1 files
Traceback (most recent call last):
  File "pretrain_gpt3.py", line 830, in <module>
    main()
  File "pretrain_gpt3.py", line 783, in main
    train_data, val_data, test_data, args.vocab_size, args.eod_token, tokenizer = get_train_val_test_data(args)
  File "pretrain_gpt3.py", line 681, in get_train_val_test_data
    (train_data, val_data, test_data), num_tokens, eod_token, tokenizer = make_gpt3_dataloaders(args)
  File "/home/atuthvatullin/ru-gpts2/src/gpt3_data_loader.py", line 104, in make_gpt3_dataloaders
    train = make_dataloader(args.train_data_path, train_dataset_args) if args.train_data_path else None
  File "/home/atuthvatullin/ru-gpts2/src/gpt3_data_loader.py", line 93, in make_dataloader
    file_path=data_path,
  File "/home/atuthvatullin/ru-gpts2/src/dataset_rugpt3.py", line 130, in __init__
    self.examples = np.vstack(examples)
  File "<__array_function__ internals>", line 6, in vstack
  File "/home/atuthvatullin/environments/albert/lib/python3.6/site-packages/numpy/core/shape_base.py", line 283, in vstack
    return _nx.concatenate(arrs, 0)
  File "<__array_function__ internals>", line 6, in concatenate
ValueError: need at least one array to concatenate

Primary job terminated normally, but 1 process returned a non-zero exit code. Per user-direction, the job has been aborted.


mpirun detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was:

Process name: [[44749,1],0]
Exit code: 1
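
For what it's worth, the "R0/1: Loaded 0/1 files" line above suggests the loader ended up with an empty examples list before the np.vstack call, which is exactly what raises "need at least one array to concatenate". Below is a minimal Python sketch that reproduces the error and checks the paths listed in train.list; the one-path-per-line format of train.list and the check itself are my assumptions, not code from the repo:

import os
import numpy as np

# Reproduces the failure dataset_rugpt3.py hits when zero files are loaded:
examples = []
try:
    np.vstack(examples)
except ValueError as err:
    print(err)  # need at least one array to concatenate

# Check that every path listed in train.list is visible from the launch directory.
with open("train.list") as filelist:  # assumed format: one file path per line
    for line in filelist:
        path = line.strip()
        if path and not os.path.isfile(path):
            print("missing:", path)

This should show whether the single file referenced by train.list resolves from the directory where pretrain_gpt3.py is launched.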