NVIDIA / Megatron-LM

Ongoing research training transformer models at scale
https://docs.nvidia.com/megatron-core/developer-guide/latest/user-guide/index.html#quick-start

KeyError: 'query_model' during ICT Pretraining #126

Closed. DevavratSinghBisht closed this issue 1 month ago.

DevavratSinghBisht commented 3 years ago

I tried to use the BERT-345M-uncased model for ICT pretraining, but an error occurs; the complete log is given below. I suspect this model is not compatible with the task, and I wasn't able to find any other suitable model in the repo. If that is the case, please direct me toward a compatible model.

examples/pretrain_ict.sh: line 8: wiki/models/megatron_bert_345m_v0.1_uncased: Is a directory
examples/pretrain_ict.sh: line 9: wiki/processed_data/corpus_indexed_title_sentence: No such file or directory
examples/pretrain_ict.sh: line 10: wiki/processed_data/corpus_indexed_text_sentence: No such file or directory
examples/pretrain_ict.sh: line 11: wiki/models/megatron_bert_345m_v0.1_uncased: Is a directory
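As a side note, the bash errors above mean the shell tried to execute those paths as commands (for example, a stray space around an '=' in a variable assignment in the script), so lines 8-11 of pretrain_ict.sh likely need fixing as well. Separately, if I understand Megatron's indexed dataset format correctly, each data prefix should have a matching <prefix>.bin / <prefix>.idx pair on disk. A quick hypothetical sanity check with the paths from my setup (the .bin/.idx expectation is my assumption, not something the error states):

```python
# Hypothetical sanity check; my assumption is that Megatron's indexed
# datasets expect a <prefix>.bin / <prefix>.idx pair for each data prefix.
import os

prefixes = [
    "wiki/processed_data/corpus_indexed_title_sentence",
    "wiki/processed_data/corpus_indexed_text_sentence",
]
for prefix in prefixes:
    for ext in (".bin", ".idx"):
        path = prefix + ext
        print(path, "->", "found" if os.path.isfile(path) else "MISSING")
```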

using world size: 1, data-parallel-size: 1, tensor-model-parallel size: 1, pipeline-model-parallel size: 1
WARNING: overriding default arguments for tokenizer_type:BertWordPieceLowerCase with tokenizer_type:BertWordPieceLowerCase
setting global batch size to 32
using torch.float16 for parameters ...
------------------------ arguments ------------------------
accumulate_allreduce_grads_in_fp32 .............. False
adam_beta1 ...................................... 0.9
adam_beta2 ...................................... 0.999
adam_eps ........................................ 1e-08
adlr_autoresume ................................. False
adlr_autoresume_interval ........................ 1000
apply_query_key_layer_scaling ................... True
apply_residual_connection_post_layernorm ........ False
attention_dropout ............................... 0.1
attention_softmax_in_fp32 ....................... False
bert_binary_head ................................ True
bert_load ....................................... wiki/models/megatron_bert_345m_v0.1_uncased
bf16 ............................................ False
bias_dropout_fusion ............................. True
bias_gelu_fusion ................................ True
biencoder_projection_dim ........................ 0
biencoder_shared_query_context_model ............ False
block_data_path ................................. None
checkpoint_activations .......................... False
checkpoint_num_layers ........................... 1
clip_grad ....................................... 1.0
consumed_train_samples .......................... 0
consumed_valid_samples .......................... 0
data_impl ....................................... infer
data_parallel_size .............................. 1
data_path ....................................... ['wiki/processed_data/corpus_indexed_title_sentence']
dataloader_type ................................. single
DDP_impl ........................................ torch
decoder_seq_length .............................. None
distribute_checkpointed_activations ............. False
distributed_backend ............................. nccl
embedding_path .................................. None
encoder_seq_length .............................. 256
eod_mask_loss ................................... False
eval_interval ................................... 1000
eval_iters ...................................... 10
evidence_data_path .............................. None
exit_duration_in_mins ........................... None
exit_interval ................................... 8000
ffn_hidden_size ................................. 3072
finetune ........................................ False
fp16 ............................................ True
fp16_lm_cross_entropy ........................... False
fp32_residual_connection ........................ False
global_batch_size ............................... 32
hidden_dropout .................................. 0.1
hidden_size ..................................... 768
hysteresis ...................................... 2
ict_head_size ................................... None
ict_load ........................................ None
img_dim ......................................... 224
indexer_batch_size .............................. 128
indexer_log_interval ............................ 1000
init_method_std ................................. 0.02
init_method_xavier_uniform ...................... False
initial_loss_scale .............................. 4294967296
kv_channels ..................................... 64
layernorm_epsilon ............................... 1e-05
lazy_mpu_init ................................... None
load ............................................ wiki/models/megatron_bert_345m_v0.1_uncased
local_rank ...................................... None
log_batch_size_to_tensorboard ................... False
log_interval .................................... 100
log_learning_rate_to_tensorboard ................ True
log_loss_scale_to_tensorboard ................... True
log_num_zeros_in_grad ........................... False
log_params_norm ................................. False
log_timers_to_tensorboard ....................... False
log_validation_ppl_to_tensorboard ............... False
loss_scale ...................................... None
loss_scale_window ............................... 1000
lr .............................................. 0.0001
lr_decay_iters .................................. None
lr_decay_samples ................................ None
lr_decay_style .................................. linear
lr_warmup_fraction .............................. 0.01
lr_warmup_iters ................................. 0
lr_warmup_samples ............................... 0
make_vocab_size_divisible_by .................... 128
mask_prob ....................................... 0.15
masked_softmax_fusion ........................... True
max_position_embeddings ......................... 512
merge_file ...................................... None
micro_batch_size ................................ 32
min_loss_scale .................................. 1.0
min_lr .......................................... 0.0
mmap_warmup ..................................... False
no_load_optim ................................... None
no_load_rng ..................................... None
no_save_optim ................................... None
no_save_rng ..................................... None
num_attention_heads ............................. 12
num_channels .................................... 3
num_classes ..................................... 1000
num_layers ...................................... 12
num_layers_per_virtual_pipeline_stage ........... None
num_workers ..................................... 2
onnx_safe ....................................... None
openai_gelu ..................................... False
optimizer ....................................... adam
override_lr_scheduler ........................... False
params_dtype .................................... torch.float16
patch_dim ....................................... 16
pipeline_model_parallel_size .................... 1
query_in_block_prob ............................. 0.1
rampup_batch_size ............................... None
rank ............................................ 0
reset_attention_mask ............................ False
reset_position_ids .............................. False
retriever_report_topk_accuracies ................ [1, 5, 10, 20, 100]
retriever_score_scaling ......................... True
retriever_seq_length ............................ 256
sample_rate ..................................... 1.0
save ............................................ wiki/models/megatron_bert_345m_v0.1_uncased
save_interval ................................... 4000
scatter_gather_tensors_in_pipeline .............. True
seed ............................................ 1234
seq_length ...................................... 256
sgd_momentum .................................... 0.9
short_seq_prob .................................. 0.1
split ........................................... 969, 30, 1
tensor_model_parallel_size ...................... 1
tensorboard_dir ................................. None
tensorboard_log_interval ........................ 1
tensorboard_queue_size .......................... 1000
titles_data_path ................................ wiki/processed_data/corpus_indexed_title_sentence
tokenizer_type .................................. BertWordPieceLowerCase
train_iters ..................................... 100000
train_samples ................................... None
use_checkpoint_lr_scheduler ..................... False
use_contiguous_buffers_in_ddp ................... False
use_cpu_initialization .......................... None
use_one_sent_docs ............................... False
virtual_pipeline_model_parallel_size ............ None
vocab_extra_ids ................................. 0
vocab_file ...................................... wiki/models/megatron_bert_345m_v0.1_uncased/bert-large-uncased-vocab.txt
weight_decay .................................... 0.01
world_size ...................................... 1
-------------------- end of arguments ---------------------
setting number of micro-batches to constant 1

building BertWordPieceLowerCase tokenizer ...
padded vocab (size: 30524) with 68 dummy tokens (new size: 30592)
initializing torch distributed ...
initializing tensor model parallel with size 1
initializing pipeline model parallel with size 1
setting random seeds to 1234 ...
initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234
compiling dataset index builder ...
make: Entering directory '/raid/ashish/nvirqa/Megatron-LM/megatron/data'
make: Nothing to be done for 'default'.
make: Leaving directory '/raid/ashish/nvirqa/Megatron-LM/megatron/data'

done with dataset index builder. Compilation time: 0.097 seconds
compiling and loading fused kernels ...
Detected CUDA files, patching ldflags
Emitting ninja build file /raid/ashish/nvirqa/Megatron-LM/megatron/fused_kernels/build/build.ninja...
Building extension module scaled_upper_triang_masked_softmax_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module scaled_upper_triang_masked_softmax_cuda...
Detected CUDA files, patching ldflags
Emitting ninja build file /raid/ashish/nvirqa/Megatron-LM/megatron/fused_kernels/build/build.ninja...
Building extension module scaled_masked_softmax_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module scaled_masked_softmax_cuda...
Detected CUDA files, patching ldflags
Emitting ninja build file /raid/ashish/nvirqa/Megatron-LM/megatron/fused_kernels/build/build.ninja...
Building extension module fused_mix_prec_layer_norm_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module fused_mix_prec_layer_norm_cuda...
done with compiling and loading fused kernels. Compilation time: 0.402 seconds
time to initialize megatron (seconds): 25.511
[after megatron is initialized] datetime: 2021-07-28 04:24:57
building BiEncoderModel...
number of parameters on (tensor, pipeline) model parallel rank (0, 0): 217890816
learning rate decay style: linear
loading checkpoint from wiki/models/megatron_bert_345m_v0.1_uncased at iteration 0
could not find arguments in the checkpoint ...
Loading query model
Traceback (most recent call last):
  File "pretrain_ict.py", line 175, in <module>
    pretrain(train_valid_test_datasets_provider,
  File "/raid/ashish/nvirqa/Megatron-LM/megatron/training.py", line 114, in pretrain
    model, optimizer, lr_scheduler = setup_model_and_optimizer(model_provider)
  File "/raid/ashish/nvirqa/Megatron-LM/megatron/training.py", line 327, in setup_model_and_optimizer
    args.iteration = load_checkpoint(model, optimizer, lr_scheduler)
  File "/raid/ashish/nvirqa/Megatron-LM/megatron/checkpointing.py", line 336, in load_checkpoint
    model[0].load_state_dict(state_dict['model'], strict=strict)
  File "/raid/ashish/nvirqa/Megatron-LM/megatron/model/module.py", line 189, in load_state_dict
    self.module.load_state_dict(state_dict, strict=strict)
  File "/raid/ashish/nvirqa/Megatron-LM/megatron/model/biencoder_model.py", line 174, in load_state_dict
    state_dict[self._query_key], strict=strict)
KeyError: 'query_model'
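For what it's worth, the crash looks like a plain key mismatch: biencoder_model.py looks up state_dict['query_model'], and a vanilla BERT checkpoint would not contain that key. A minimal inspection sketch, assuming the standard Megatron checkpoint layout of release/mp_rank_00/model_optim_rng.pt under the checkpoint directory (adjust the path if your layout differs):

```python
# Minimal inspection sketch; the release/mp_rank_00/model_optim_rng.pt
# layout is my assumption based on the standard Megatron checkpoint format.
import torch

ckpt = torch.load(
    "wiki/models/megatron_bert_345m_v0.1_uncased/release/mp_rank_00/model_optim_rng.pt",
    map_location="cpu",
)
# A plain BERT checkpoint stores weights under BERT-style keys, while the
# biencoder loader expects 'query_model', which matches the KeyError above.
print(sorted(ckpt["model"].keys()))
```

If this prints only BERT-style keys (e.g. 'language_model'), then, reading the argument dump above, the likely culprit is that --load points at the BERT checkpoint. My understanding is that --bert-load is what initializes the biencoder from BERT, while --load and --save should point at a separate directory where the ICT checkpoints will be written.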

github-actions[bot] commented 1 year ago

Marking as stale. No activity in 60 days. Remove stale label or comment or this will be closed in 7 days.
