File "d:\Bert_pre\GPT_2\GPT2-Chinese-old_gpt_2_chinese_before_2021_4_22\GPT2-Chinese-old_gpt_2_chinese_before_2021_4_22\tokenizations\tokenization_bert.py", line 131, in __init__
    "model use tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)".format(vocab_file))
ValueError: Can't find a vocabulary file at path 'cache/vocab_small.txt'. To load the vocabulary from a Google pretrained model use tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)
What does this error mean?
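The ValueError is raised by the repo's local BertTokenizer when the vocabulary file it was pointed at ('cache/vocab_small.txt') does not exist on disk, so a first diagnostic step is simply to check that path before constructing the tokenizer. A minimal sketch of that check, assuming the path from the traceback (the GPT2-Chinese repo ships vocab_small.txt, and the error message itself suggests `BertTokenizer.from_pretrained(...)` as the alternative):

```python
import os

# Path the script expects, taken from the traceback
VOCAB_PATH = "cache/vocab_small.txt"

def resolve_vocab(path):
    """Return the path if the vocab file exists, else None to signal a fallback."""
    if os.path.isfile(path):
        return path
    return None

vocab = resolve_vocab(VOCAB_PATH)
if vocab is None:
    # The file is missing. Two common fixes:
    #   1) copy vocab_small.txt from the GPT2-Chinese repo into cache/, or
    #   2) load a Google pretrained vocabulary instead, as the error suggests:
    #      tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
    print("vocab file not found:", VOCAB_PATH)
```

In short: the training script was started with a `--tokenizer_path` (or default) pointing at a vocab file that isn't there, not a bug in the tokenizer itself.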