Error loading TransGPT-MM-v1. The code executed:

import torch
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM

def generate_prompt(text):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

Instruction:
{text}

Response:"""

checkpoint = "/home/app/model/TransGPT/model/TransGPT-MM-v1"
tokenizer = LlamaTokenizer.from_pretrained(checkpoint)
model = LlamaForCausalLM.from_pretrained(checkpoint).half().cuda()
model.eval()

text = '我想了解如何申请和更新驾驶证?'  # "How do I apply for and renew a driver's license?"
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')

with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        max_new_tokens=1024,
        temperature=1,
        top_k=20,
        top_p=0.9,
        repetition_penalty=1.15
    ).cuda()
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(text, '').strip())
The error message is as follows:
Traceback (most recent call last):
  File "test.py", line 14, in <module>
    tokenizer = LlamaTokenizer.from_pretrained(checkpoint)
  File "/home/app/model/anaconda3/envs/langchain/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1838, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for '/home/app/model/TransGPT/model/TransGPT-MM-v1'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/home/app/model/TransGPT/model/TransGPT-MM-v1' is the correct path to a directory containing all relevant files for a LlamaTokenizer tokenizer
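This OSError means from_pretrained could not find the tokenizer files inside the local checkpoint directory. A minimal sketch for diagnosing this, assuming the checkpoint is a plain local directory and that the LlamaTokenizer's SentencePiece vocabulary uses the usual filename tokenizer.model (the helper name and file lists here are illustrative, not from the original report):

import os

# LlamaTokenizer needs the SentencePiece vocabulary file; the JSON files
# are commonly present in checkpoints but not strictly required.
REQUIRED = ["tokenizer.model"]
OPTIONAL = ["tokenizer_config.json", "special_tokens_map.json"]

def missing_tokenizer_files(checkpoint_dir):
    """Return the required tokenizer files that are absent from checkpoint_dir."""
    return [f for f in REQUIRED
            if not os.path.isfile(os.path.join(checkpoint_dir, f))]

if __name__ == "__main__":
    checkpoint = "/home/app/model/TransGPT/model/TransGPT-MM-v1"
    missing = missing_tokenizer_files(checkpoint)
    if missing:
        print("Missing tokenizer files:", missing)
    else:
        print("All required tokenizer files present.")

If tokenizer.model is missing, the checkpoint download was likely incomplete, or the directory contains only the model weights and the tokenizer must be fetched separately.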