ymcui / Chinese-BERT-wwm

Pre-Training with Whole Word Masking for Chinese BERT (Chinese BERT-wwm model series)
https://ieeexplore.ieee.org/document/9599397
Apache License 2.0

Model name 'hfl/chinese-roberta-wwm-ext' not found in model shortcut name list #130

Closed fathouse closed 4 years ago

fathouse commented 4 years ago

I used the following code:

```python
from transformers import BertTokenizer, BertModel

bert = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext")
bert_tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
```

but it printed:

loading weights file https://cdn.huggingface.co/hfl/chinese-roberta-wwm-ext/pytorch_model.bin from cache at C:\Users\bcc/.cache\torch\transformers\47d2326d47246cef3121d70d592c0391a4ed594b04ce3dea8bd47edd37e20370.6ac27309c356295f0e005c6029fce503ec6a32853911ebf79f8bddd8dd10edad
All model checkpoint weights were used when initializing BertModel.

All the weights of BertModel were initialized from the model checkpoint at hfl/chinese-roberta-wwm-ext.
If your task is similar to the task the model of the checkpoint was trained on, you can already use BertModel for predictions without further training.
Model name 'hfl/chinese-roberta-wwm-ext' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). Assuming 'hfl/chinese-roberta-wwm-ext' is a path, a model identifier, or url to a directory containing tokenizer files.
loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/vocab.txt from cache at C:\Users\bcc/.cache\torch\transformers\5593eb652e3fb9a17042385245a61389ce6f0c8a25e167519477d7efbdf2459a.9b42061518a39ca00b8b52059fd2bede8daa613f8a8671500e518a8c29de8c00
loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/added_tokens.json from cache at C:\Users\bcc/.cache\torch\transformers\23740a16768d945f44a24590dc8f5e572773b1b2868c5e58f7ff4fae2a721c49.3889713104075cfee9e96090bcdd0dc753733b3db9da20d1dd8b2cd1030536a2
loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/special_tokens_map.json from cache at C:\Users\bcc/.cache\torch\transformers\6f13f9fe28f96dd7be36b84708332115ef90b3b310918502c13a8f719a225de2.275045728fbf41c11d3dae08b8742c054377e18d92cc7b72b6351152a99b64e4
loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/tokenizer_config.json from cache at C:\Users\bcc/.cache\torch\transformers\5bb5761fdb6c8f42bf7705c27c48cffd8b40afa8278fa035bc81bf288f108af9.1ade4e0ac224a06d83f2cb9821a6656b6b59974d6552e8c728f2657e4ba445d9
loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/tokenizer.json from cache at None
Traceback (most recent call last):
  File "D:/pycharm/train.py", line 290, in <module>
    single_train()
  File "D:/pycharm/train.py", line 284, in single_train
    ins = Instructor(opt)
  File "D:/pycharm/train.py", line 33, in __init__
    tokenizer = Tokenizer4Bert(bert_tokenizer, opt.max_seq_len)
  File "D:\pycharm\data_utils.py", line 145, in __init__
    self.tokenizer = BertTokenizer.from_pretrained(pretrained_bert_name)
  File "D:\anaconda\envs\tf-gpu\lib\site-packages\transformers\tokenization_utils_base.py", line 1140, in from_pretrained
    return cls._from_pretrained(*inputs, **kwargs)
  File "D:\anaconda\envs\tf-gpu\lib\site-packages\transformers\tokenization_utils_base.py", line 1171, in _from_pretrained
    if os.path.isfile(pretrained_model_name_or_path) or is_remote_url(pretrained_model_name_or_path):
  File "D:\anaconda\envs\tf-gpu\lib\site-packages\transformers\file_utils.py", line 447, in is_remote_url
    parsed = urlparse(url_or_filename)
  File "D:\anaconda\envs\tf-gpu\lib\urllib\parse.py", line 367, in urlparse
    url, scheme, _coerce_result = _coerce_args(url, scheme)
  File "D:\anaconda\envs\tf-gpu\lib\urllib\parse.py", line 123, in _coerce_args
    return _decode_args(args) + (_encode_result,)
  File "D:\anaconda\envs\tf-gpu\lib\urllib\parse.py", line 107, in _decode_args
    return tuple(x.decode(encoding, errors) if x else '' for x in args)
  File "D:\anaconda\envs\tf-gpu\lib\urllib\parse.py", line 107, in <genexpr>
    return tuple(x.decode(encoding, errors) if x else '' for x in args)
AttributeError: 'int' object has no attribute 'decode'
Model name '80' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). Assuming '80' is a path, a model identifier, or url to a directory containing tokenizer files.

Process finished with exit code 1

Do you know why this happens? The network is fine and the model downloads successfully, but after downloading it still says the model cannot be found. I have tried both `transformers==2.9.0` and `transformers==3.0.2`.
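The tail of the traceback is telling: `is_remote_url` ends up calling `urlparse` on the integer `80` (note the later log line `Model name '80' not found`), and that is what raises `AttributeError: 'int' object has no attribute 'decode'`. A minimal reproduction using only the standard library, with no transformers involved:

```python
from urllib.parse import urlparse

# Passing a non-string where a model name / URL is expected reproduces the
# exact error from the traceback: urlparse tries to .decode() its argument.
try:
    urlparse(80)
except AttributeError as e:
    print(e)  # 'int' object has no attribute 'decode'
```

This suggests the tokenizer code received `opt.max_seq_len` (80) where the model-name string was expected, rather than a problem with the model itself.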

fathouse commented 4 years ago

Could this be a compatibility problem between the old pytorch_transformers package and the new transformers package?

ymcui commented 4 years ago

First, your model was in fact loaded correctly from our checkpoint, because the log shows:

All the weights of BertModel were initialized from the model checkpoint at hfl/chinese-roberta-wwm-ext.

Also, I tested this successfully with torch 1.5.0 + transformers 3.0.2 (on Linux):

```python
>>> from transformers import BertTokenizer, BertModel
>>> bert_tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
>>> bert = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext")
```

Finally, this is not a pytorch_transformers vs. transformers issue, because the old package does not support loading by model identifier at all. I also see you are on Windows; I am not sure whether the OS is the cause. If you hit the same problem when loading models published by others, I suggest opening an issue in the transformers repository.

fathouse commented 4 years ago

Thank you for your answer. Loading baseline models such as bert-base-chinese and bert-base-uncased works fine, so the problem appears to be in my own code or configuration.
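For readers landing here with the same traceback: it shows the call `Tokenizer4Bert(bert_tokenizer, opt.max_seq_len)` while the constructor forwards its second parameter to `BertTokenizer.from_pretrained`, so the integer `80` lands where the model name belongs. A hypothetical sketch of that argument-order mismatch (the class below only mimics the pattern visible in the traceback; it is not the project's real code):

```python
# Hypothetical stand-in for the Tokenizer4Bert seen in the traceback.
class Tokenizer4Bert:
    def __init__(self, max_seq_len, pretrained_bert_name):
        # The real class calls BertTokenizer.from_pretrained(pretrained_bert_name)
        # here; passing an int instead of a model-name string is what triggers
        # the AttributeError in the traceback.
        if not isinstance(pretrained_bert_name, str):
            raise TypeError("pretrained_bert_name must be a model-name string")
        self.max_seq_len = max_seq_len
        self.pretrained_bert_name = pretrained_bert_name

# Wrong call order (matches the traceback): the int ends up as the model name.
# Tokenizer4Bert(bert_tokenizer, opt.max_seq_len)

# Correct order: sequence length first, model identifier second.
tok = Tokenizer4Bert(80, "hfl/chinese-roberta-wwm-ext")
print(tok.pretrained_bert_name)
```

If the real `Tokenizer4Bert` has this signature, swapping the two arguments at the call site would explain why baseline models loaded through other code paths work fine.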