OpenNMT / OpenNMT-py

Open Source Neural Machine Translation and (Large) Language Models in PyTorch
https://opennmt.net/
MIT License

Error evaluating LM-prior checkpoint: #2541

Closed by anthdr 9 months ago

anthdr commented 9 months ago

I ran an LM-prior training with LLaMA 2 (in the official Docker image). Loading the model for inference returns this error:

Traceback (most recent call last):
  File "/some_path/LM/toy-lina-LLM/LLM_prior_runs/script_eval.py", line 43, in <module>
    main()
  File "/some_path/LM/toy-lina-LLM/LLM_prior_runs/script_eval.py", line 36, in main
    engine = InferenceEnginePY(opt)
  File "/opennmt-py/onmt/inference_engine.py", line 116, in __init__
    self.translator = build_translator(
  File "/opennmt-py/onmt/translate/translator.py", line 33, in build_translator
    vocabs, model, model_opt = load_test_model(opt, device_id)
  File "/opennmt-py/onmt/model_builder.py", line 164, in load_test_model
    model.load_state_dict(
  File "/opennmt-py/onmt/models/model.py", line 152, in load_state_dict
    self._load_param(
  File "/opennmt-py/onmt/models/model.py", line 73, in _load_param
    param.data.size()
AssertionError: An error in model's partition and checkpoint's slice was detected

https://github.com/OpenNMT/OpenNMT-py/blob/0436cdd0915534f84d3e8783f3e4193c64eb44d9/onmt/models/model.py#L72-L77
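For context, the assertion at those lines checks that each tensor slice stored in the checkpoint has the same size as the corresponding parameter of the built model, roughly like this (a paraphrase of the linked code, not an exact quote; ckpt_t stands for the slice loaded from the checkpoint, and the actual variable name may differ in the source):

assert (
    param.data.size() == ckpt_t.size()
), "An error in model's partition and checkpoint's slice was detected"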

Printing the two sizes compared in the assertion gives:

torch.Size([32000, 1024])
torch.Size([32000, 1024])

torch.Size([1024, 1024])
torch.Size([1024, 1024])

torch.Size([1024, 1024])
torch.Size([1024, 1024])

torch.Size([1024, 1024])
torch.Size([1024, 1024])

torch.Size([1024, 1024])
torch.Size([1024, 1024])

torch.Size([41, 64])
torch.Size([41, 64])

torch.Size([4096, 1024])
torch.Size([4096, 1])
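Only the last pair differs: [4096, 1024] versus [4096, 1] (the print does not label which of the two is the model partition and which is the checkpoint slice). Given transformer_ff: 4096 and hidden_size: 1024 in the config below, this looks like a feed-forward weight, i.e. one of the layers listed in quant_layers. To locate the offending tensor directly, the checkpoint shapes can be dumped with a few lines of standard PyTorch (a minimal sketch; it assumes the checkpoint is a regular torch pickle with the weights under a "model" key, which may not hold for every OpenNMT-py checkpoint format, and the path is a placeholder):

import torch

# Hypothetical checkpoint path, for illustration only.
ckpt = torch.load("/some_path/05_train/v3_model_step_100000.pt", map_location="cpu")

# OpenNMT-py .pt checkpoints typically keep the weights under a "model" key
# (assumption; print ckpt.keys() to confirm for this checkpoint).
state_dict = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt

for name, tensor in state_dict.items():
    # Flag suspicious [N, 1] tensors like the one in the failing pair.
    if tensor.dim() == 2 and tensor.shape[1] == 1:
        print(name, tuple(tensor.shape), tensor.dtype)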

Here is the config used for training:

_multilingual: false
accum_count:
- 2
- 4
- 8
accum_steps:
- 0
- 15000
- 30000
adam_beta2: 0.998
attention_dropout:
- 0.1
average_decay: 0.0005
# batch_size: 2048
# batch_type: tokens
batch_size: 1
batch_type: sents
bucket_size: 150000
bucket_size_increment: 10000
bucket_size_init: 150000
case_strategy: pos
data:
  CC-matrix.en-fr_en-fr:
    id: CC-matrix.en-fr_en-fr
    path_src: /some_path/CC-matrix.en-fr.en
    path_tgt: /some_path/CC-matrix.en-fr.fr
    source: en
    src_lang: en
    target: fr
    tgt_lang: fr
    weight: 100
dec_layers: 2
decay_method: noam
decoder_type: transformer
dropout:
- 0.1
dropout_steps:
- 0
dump_transforms: false
enc_layers: 12
encoder_type: transformer
gpu_ranks:
- 0
heads: 16
hidden_size: 1024
keep_checkpoint: 50
label_smoothing: 0.1
learning_rate: 2
lemmatization_strategy: pos
lemmatize_prob: 50
lm_prior_lambda: 0.5
lm_prior_model: /some_path/checkpoints/llama-2-7B_safetensors.pt
lm_prior_tau: 2
lowercase_prob: 50
max_grad_norm: 0
max_relative_positions: 20
model_dtype: fp16
n_sample: 0
num_workers: 2
only_term: 20
optim: fusedadam
overwrite: true
param_init: 0
param_init_glorot: true
prefetch_factor: 5000
report_every: 50
save_checkpoint_steps: 5000
save_data: /some_path/05_train/onmt_run
save_model: /some_path/05_train/v3_model
seed: 2345
share_decoder_embeddings: true
share_embeddings: true
share_vocab: true
skip_empty_level: silent
src_onmttok_kwargs: '{''mode'': ''none'', ''preserve_placeholders'': True}'
src_seq_length: 200
src_subword_model: /some_path/subwords/tokenizer.spm.model
src_subword_type: sentencepiece
src_vocab: /some_path/vocab/llama-2-7B_vocab.txt
src_words_min_frequency: 5
tensorboard: true
tensorboard_log_dir: /some_path/05_train/log
tgt_onmttok_kwargs: '{''mode'': ''none'', ''preserve_placeholders'': True}'
tgt_seq_length: 200
tgt_subword_model: /some_path/subwords/tokenizer.spm.model
tgt_subword_type: sentencepiece
tgt_words_min_frequency: 5
train_steps: 100050
transformer_ff: 4096
transforms:
- onmt_tokenize
- filtertoolong
# valid_batch_size: 2048
valid_batch_size: 1
valid_metrics:
- BLEU
- TER
valid_steps: 5000
warmup_steps: 6000
word_vec_size: 1024
world_size: 1

quant_layers: ['w_1', 'w_2', 'w_3'] #,]'linear_values', 'linear_query', 'linear_keys', 'final_linear']
quant_type: "bnb_NF4"

(I modified some paths to make the output more readable; it is very unlikely to be a path error.)

l-k-11235 commented 9 months ago

The Docker image for this experiment is ghcr.io/opennmt/opennmt-py:3.4.3-ubuntu22.04-cuda11.8, but the PYTHONPATH refers to commit 0436cdd0915534f84d3e8783f3e4193c64eb44d9. We hit the error when loading the checkpoint while trying to infer from a file with the inference engine, so the issue is not specific to the LM prior.
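To double-check which code actually runs inside the container (the PYTHONPATH checkout can shadow the package shipped with the image), a quick Python check helps:

import onmt

# Shows which checkout Python actually imports; if it points into
# /opennmt-py, then `git -C /opennmt-py rev-parse HEAD` gives the commit.
print(onmt.__file__)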

from onmt.inference_engine import InferenceEnginePY

import onmt.opts as opts
from onmt.utils.parse import ArgumentParser
from onmt.utils.misc import use_gpu, set_random_seed


def _get_parser():
    parser = ArgumentParser(description="simple_inference_engine_py.py")
    opts.config_opts(parser)
    opts.translate_opts(parser, dynamic=True)
    return parser


def main():
    # The only required argument is the path to the inference config file.
    parser = ArgumentParser()
    parser.add_argument(
        "-inference_config_file", help="Inference config file", required=True, type=str
    )
    args = parser.parse_args()
    base_args = ["-config", args.inference_config_file]

    # Parse and validate the translate options from the config file.
    parser = _get_parser()
    opt = parser.parse_args(base_args)
    ArgumentParser.validate_translate_opts(opt)
    ArgumentParser._get_all_transform_translate(opt)
    ArgumentParser._validate_transforms_opts(opt)
    ArgumentParser.validate_translate_opts_dynamic(opt)
    set_random_seed(opt.seed, use_gpu(opt))

    # Build the engine and translate the input file; the error is raised
    # while the checkpoint is being loaded in InferenceEnginePY.
    opt.model_task = "seq2seq"
    engine = InferenceEnginePY(opt)
    scores, preds = engine.infer_file()
    print(list(zip(scores, preds)))
    engine.terminate()


if __name__ == "__main__":
    main()
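For completeness, the script is invoked as python script_eval.py -inference_config_file <file>, where the file holds standard OpenNMT-py translate options. A minimal sketch of such a config (paths are placeholders, and the option names are assumed to follow onmt.opts.translate_opts; check them against your OpenNMT-py version):

model: /some_path/05_train/v3_model_step_100000.pt
src: /some_path/test.en
output: /some_path/pred.fr
gpu: 0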