I really appreciate the fantastic job done by the authors on this project.
When I tried to interact with the LLaMA model, I got the following error:
Traceback (most recent call last):
File "exp.py", line 7, in <module>
model = llama.load("/home/qw/proj/ckpt/llama2/llama-2-7b-chat", adapter_ckpt='/home/qw/proj/Point-Bind_Point-LLM/ckpt/7B.pth', knn=True)
File "/home/qw/proj/Point-Bind_Point-LLM/Point-LLM/llama/llama_adapter.py", line 339, in load
model = LLaMA_adapter(
File "/home/qw/proj/Point-Bind_Point-LLM/Point-LLM/llama/llama_adapter.py", line 53, in __init__
self.tokenizer = Tokenizer(model_path=llama_tokenizer)
File "/home/qw/proj/Point-Bind_Point-LLM/Point-LLM/llama/tokenizer.py", line 17, in __init__
self.sp_model = SentencePieceProcessor(model_file=model_path)
File "/home/qw/.conda/envs/pb/lib/python3.8/site-packages/sentencepiece/__init__.py", line 447, in Init
self.Load(model_file=model_file, model_proto=model_proto)
File "/home/qw/.conda/envs/pb/lib/python3.8/site-packages/sentencepiece/__init__.py", line 905, in Load
return self.LoadFromFile(model_file)
File "/home/qw/.conda/envs/pb/lib/python3.8/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
I would be grateful if someone could help me with this problem.
In addition, I think it would be great if the authors could also specify the required versions of the dependencies (e.g., PyTorch, torchaudio, torchvision, cudatoolkit, etc.).
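In case it helps with debugging: this `ParseFromArray` failure usually means SentencePiece found the file at the path it was given but could not parse it as a tokenizer model, which typically points to a missing, truncated, or wrong file (e.g., `tokenizer.model` absent from the checkpoint directory, or an LFS pointer file downloaded instead of the real weights). A minimal sketch of a pre-flight check, assuming the loader expects a `tokenizer.model` next to the checkpoint (the helper name and path are my own, not part of the repo):

```python
import os

def check_tokenizer(path):
    """Rough sanity check for a SentencePiece tokenizer.model file.

    Hypothetical helper: a missing or zero-byte file here is a common
    cause of the ParseFromArray RuntimeError in the traceback above.
    """
    if not os.path.isfile(path):
        return "missing"
    size = os.path.getsize(path)
    if size == 0:
        return "empty"
    return "present ({} bytes)".format(size)

# Example (adjust to your own checkpoint directory):
# print(check_tokenizer("/home/qw/proj/ckpt/llama2/llama-2-7b-chat/tokenizer.model"))
```

If the file reports as present with a plausible size (roughly 500 KB for the LLaMA-2 tokenizer) but parsing still fails, re-downloading the checkpoint is probably the next step.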