tiagorangel1 opened 1 year ago
Yes I am having the same issue.
Downloading (…)lve/main/config.json: 100% 427/427 [00:00<00:00, 60.5kB/s]
Loading model ...
Done.
Downloading (…)okenizer_config.json: 100% 141/141 [00:00<00:00, 60.3kB/s]
Traceback (most recent call last):
File "/content/GPTQ-for-LLaMa/llama_inference.py", line 114, in <module>
Same issue:
Loading model ...
Done.
Traceback (most recent call last):
File "/content/GPTQ-for-LLaMa/llama_inference.py", line 114, in <module>
I created a fix to solve the problem. @amrrs please accept and merge.
Fix: in requirements.txt, change the last line to git+https://github.com/zphang/transformers@660dd6e2bbc9255aacd0e60084cf15df1b6ae00d#egg=transformers
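For anyone applying this fix by hand: it pins transformers to a specific commit of zphang's fork using pip's VCS-URL syntax, so pip builds that exact revision instead of pulling the PyPI release. A sketch of what the last line of the requirements file would look like (the URL and commit hash are the ones from the comment above; everything else here is just annotation):

```
# Last line of requirements.txt: install transformers from zphang's fork,
# pinned to a fixed commit so the build is reproducible.
# Format: git+<repo URL>@<commit>#egg=<package name>
git+https://github.com/zphang/transformers@660dd6e2bbc9255aacd0e60084cf15df1b6ae00d#egg=transformers
```

After editing the file, reinstall with pip install -r requirements.txt; if an older transformers is already present, add --force-reinstall so pip replaces it with the pinned fork.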
OK, I followed the instructions but am still getting this error: Traceback (most recent call last):
File "/content/GPTQ-for-LLaMa/llama_inference.py", line 108, in <module>
@Tylersuard I merged your PR, does it fix your problem?
I have the same issue. Followed all instructions.
I am getting this error: