(colake) [zqwang@localhost lama]$ python eval_lama.py
Google-RE
{'dataset_filename': '../data/LAMA/Google_RE/place_of_birth_test.jsonl', 'common_vocab_filename': '../data/LAMA/common_vocab_cased.txt', 'template': '[X] was born in [Y] .', 'batch_size': 64, 'max_sentence_length': 100, 'threads': -1, 'model_path': '../model/'}
Traceback (most recent call last):
  File "eval_lama.py", line 114, in <module>
    eval_model(parameters)
  File "eval_lama.py", line 83, in eval_model
    model = Roberta(args)
  File "/data/zqwang/KnowledgeGraph/CoLAKE/lama/model.py", line 14, in __init__
    self.tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
  File "/data/zqwang/anaconda3/envs/colake/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 911, in from_pretrained
    return cls._from_pretrained(*inputs, **kwargs)
  File "/data/zqwang/anaconda3/envs/colake/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 1007, in _from_pretrained
    raise EnvironmentError(
OSError: Model name 'roberta-base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'roberta-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
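
This OSError usually means the installed transformers release could not download the roberta-base vocabulary files (for example, because the machine has no access to the model hub) and then failed to reinterpret 'roberta-base' as a local path. Below is a minimal workaround sketch, assuming vocab.json and merges.txt can be fetched separately on a machine with network access; the directory path is hypothetical, not part of the CoLAKE repo.

# Workaround sketch: load the tokenizer from a local directory instead of the
# model identifier. Assumes vocab.json and merges.txt were downloaded manually
# beforehand (e.g. from the roberta-base repository on huggingface.co) into a
# hypothetical local directory.
from transformers import RobertaTokenizer

local_dir = "/path/to/roberta-base-vocab"  # hypothetical: must contain vocab.json and merges.txt
tokenizer = RobertaTokenizer.from_pretrained(local_dir)

# Quick sanity check that the tokenizer loaded correctly.
print(tokenizer.tokenize("[X] was born in [Y] ."))

Passing the same local directory in place of 'roberta-base' in lama/model.py (line 14 in the traceback) would sidestep the download entirely.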