afaqueumer / DocQA

Question Answering with Custom Files using LLMs

Could not load Llama model from path #2

Open xmagcx opened 1 year ago

xmagcx commented 1 year ago

52, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\mauri\Downloads\DocQA-main\DocQA-main\app.py", line 42, in <module>
    llm = LlamaCpp(model_path="./models/llama-7b.ggmlv3.q4_0.bin")
  File "C:\Users\mauri\Downloads\DocQA-main\DocQA-main\venv\lib\site-packages\langchain\load\serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
  Could not load Llama model from path: ./models/llama-7b.ggmlv3.q4_0.bin. Received error Model path does not exist: ./models/llama-7b.ggmlv3.q4_0.bin (type=value_error)
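For anyone else landing here: the pydantic ValidationError is just wrapping a plain file-not-found, and a relative path like ./models/... resolves against the current working directory, not app.py's folder. A minimal diagnostic sketch (only the model path below comes from the traceback; the rest is illustration):

```python
import os

# Where is Python actually looking? Relative paths resolve against this.
print("cwd:", os.getcwd())

# The hard-coded path from app.py line 42.
model_path = "./models/llama-7b.ggmlv3.q4_0.bin"
print("exists:", os.path.exists(model_path))
print("resolves to:", os.path.abspath(model_path))
```

If `exists` prints False, the model file is missing or the app was launched from a different directory than expected.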

What version of Python are you using?

afaqueumer commented 1 year ago

I guess you need to edit the path or place the model in the same directory. This is a path error; the model path was hard-coded.
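One way to make the hard-coded path robust, as a sketch rather than the project's actual code (the LlamaCpp import matches the langchain version shown in the traceback; anchoring the path on `__file__` is an assumption):

```python
from pathlib import Path

from langchain.llms import LlamaCpp

# Resolve the model relative to this file instead of the working directory,
# so the app behaves the same no matter where it is launched from.
MODEL_PATH = Path(__file__).resolve().parent / "models" / "llama-7b.ggmlv3.q4_0.bin"

if not MODEL_PATH.exists():
    # Fail with an actionable message instead of pydantic's wrapped error.
    raise FileNotFoundError(f"Download the GGML model and place it at {MODEL_PATH}")

llm = LlamaCpp(model_path=str(MODEL_PATH))
```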

unkrejativ commented 1 year ago

Hey @xmagcx, were you able to solve the problem?

six-finger commented 7 months ago

Replace pipenv with python -m when running the app.
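A hedged reading of this suggestion, since the thread doesn't spell it out: the traceback above shows the app runs under Streamlit, so the swap presumably means launching the module directly with the venv's interpreter, e.g. `python -m streamlit run app.py` instead of a pipenv-wrapped command. The exact original command is an assumption, not confirmed by the thread.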