Open SergAnikin opened 10 months ago
The error above occurs because LlamaCpp(**kwargs) (in localGPT\load_models.py) raises an error:
Could not load Llama model from path: ./models\models--TheBloke--Llama-2-7b-Chat-GGUF\snapshots\ad37d4910ba009a69bb41de44942056d635214ab\llama-2-7b-chat.Q4_K_M.gguf. Received error Failed to load shared library 'C:\ProgramData\Anaconda3\envs\localGPT\lib\site-packages\llama_cpp\llama.dll': [WinError 1114] A dynamic link library (DLL) initialization routine failed (type=value_error)
And the exception handler returns None for llm. I think the except section must handle this type of error too, not only the `if "ggml" in model_basename` case:
```python
        return LlamaCpp(**kwargs)
    except:
        if "ggml" in model_basename:
            logging.info("If you were using GGML model, LLAMA-CPP Dropped Support, Use GGUF Instead")
            return None
        raise
```
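A minimal runnable sketch of that suggested handling (the LlamaCpp call is simulated with a stand-in that raises the same ValueError, since the point here is only the control flow of the except block, not the real loader):

```python
import logging


def fake_llamacpp(**kwargs):
    # Stand-in for LlamaCpp(**kwargs); simulates the failing case above.
    raise ValueError("Could not load Llama model from path: ... (type=value_error)")


def load_model(model_basename, **kwargs):
    """Sketch of the suggested behavior: swallow the error (and hint at
    GGUF) only for GGML models; re-raise everything else so the caller
    never silently receives llm=None."""
    try:
        return fake_llamacpp(**kwargs)
    except ValueError:
        if "ggml" in model_basename:
            # Note: logging.info, not logging.INFO (INFO is an int level
            # constant, so calling it would itself raise TypeError).
            logging.info("If you were using a GGML model, llama-cpp dropped support; use GGUF instead")
            return None
        raise  # surface the real DLL/load error instead of returning None
```

With this shape, a GGML model still gets the soft warning and None, while the GGUF + DLL failure propagates with its original message instead of crashing later at RetrievalQA.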
any solution?
Same issue on Windows 10.
It's a fresh install on a new laptop with Python 3.10. I used venv, not conda, which shouldn't really change anything.
any solution?
I deployed localGPT on Ubuntu 22.04 - that was the solution.
This happens to me as well. In my case, `pip install llama-cpp-python` was the solution, since this package is missing from requirements.txt and is required for the code to run. In another instance I had to install a specific version of llama-cpp-python; with the latest version, pip install was not working on Windows (Windows Server 2019).
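To tell the two failure modes apart (package missing from requirements.txt vs. package installed but its native library failing to initialize, as in the WinError 1114 above), a small diagnostic along these lines can help; `check_llama_cpp` is a hypothetical helper, not part of localGPT:

```python
import importlib.util


def check_llama_cpp():
    """Return a short status string for the llama-cpp-python install."""
    if importlib.util.find_spec("llama_cpp") is None:
        # Package not installed at all: pip install llama-cpp-python
        return "missing: run `pip install llama-cpp-python`"
    try:
        # Importing loads the shared library (llama.dll on Windows), so
        # a WinError 1114-style DLL init failure would surface here.
        import llama_cpp
    except Exception as err:
        return f"broken: {err}"
    return f"ok: llama-cpp-python {llama_cpp.__version__}"


print(check_llama_cpp())
```

"missing" points at requirements.txt, while "broken" points at the wheel/DLL and suggests reinstalling or pinning a different version.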
Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cpu
This tip didn't help me: `pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir` (from issues #460, #475).
The error is raised at RetrievalQA.from_chain_type(llm=llm, ... ).