huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

seq2seq example with T5 does not run due to issue with loading tokenizer #10577

Closed dorost1234 closed 3 years ago

dorost1234 commented 3 years ago

Environment info

Who can help

@patrickvonplaten, @patil-suraj

Information

Hi, I am trying to run the run_seq2seq.py example on an mT5 model:

python run_seq2seq.py --model_name_or_path google/mt5-small --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --task translation_en_to_ro --output_dir /test/test_large --do_train --do_eval --predict_with_generate --max_train_samples 500 --max_val_samples 500 --tokenizer_name google/mt5-small

and I am getting this error:

Traceback (most recent call last):
  File "run_seq2seq.py", line 539, in <module>
    main()
  File "run_seq2seq.py", line 309, in main
    use_auth_token=True if model_args.use_auth_token else None,
  File "/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 379, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1789, in from_pretrained
    resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
  File "/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1860, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 147, in __init__
    **kwargs,
  File "/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 103, in __init__
    "Couldn't instantiate the backend tokenizer from one of: "
ValueError: Couldn't instantiate the backend tokenizer from one of: (1) a `tokenizers` library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one.

Thank you for your help.

dorost1234 commented 3 years ago

Solved by installing sentencepiece. It would be helpful to mention this dependency in a requirements.txt file. Thanks!
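For anyone hitting the same `ValueError`: the T5/mT5 fast tokenizer needs the `sentencepiece` package to convert the slow tokenizer's SentencePiece model. A minimal pre-flight check (a sketch, not part of the example scripts; the helper name is hypothetical) could look like this:

```python
import importlib.util

def has_sentencepiece() -> bool:
    """Return True if the sentencepiece package is importable."""
    return importlib.util.find_spec("sentencepiece") is not None

if not has_sentencepiece():
    # This is the missing dependency behind the traceback above.
    print("Missing dependency: run `pip install sentencepiece` first")
```

Running this before run_seq2seq.py makes the failure mode explicit instead of surfacing deep inside tokenizer construction.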

patil-suraj commented 3 years ago

Hi @dorost1234, glad you resolved the issue. Your transformers version is old; we have since added the sentencepiece dependency to requirements.txt: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/requirements.txt#L2