zylon-ai / private-gpt

Interact with your documents using the power of GPT, 100% privately, no data leaks
https://privategpt.dev
Apache License 2.0
53.95k stars 7.25k forks

Llama object has no attribute as ctx #986

Closed azxan2009 closed 8 months ago

azxan2009 commented 1 year ago


Error:

```
llama.cpp: loading model from models/ggml-model-q4_0.bin
error loading model: unexpectedly reached end of file
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/content/localgpt/privateGPT.py", line 83, in <module>
    main()
  File "/content/localgpt/privateGPT.py", line 36, in main
    llm = LlamaCpp(model_path=model_path, max_tokens=model_n_ctx, n_batch=model_n_batch, callbacks=callbacks, verbose=False)
  File "/usr/local/lib/python3.10/dist-packages/langchain/load/serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
  Could not load Llama model from path: models/ggml-model-q4_0.bin. Received error  (type=value_error)
Exception ignored in: <function Llama.__del__ at 0x7dd38b822950>
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/llama_cpp/llama.py", line 1445, in __del__
    if self.ctx is not None:
AttributeError: 'Llama' object has no attribute 'ctx'
```

My .env file:

```
PERSIST_DIRECTORY=db
MODEL_TYPE=LlamaCpp
MODEL_PATH=models/ggml-model-q4_0.bin
EMBEDDINGS_MODEL_NAME=paraphrase-TinyBERT-L6-v2
MODEL_N_CTX=1000
MODEL_N_BATCH=8
TARGET_SOURCE_CHUNKS=4
```

imartinez commented 1 year ago

@azxan2009 make sure the path to the model file is correct. Have you tried other models? Give the GPT4All model type a try.

PhucHoang998 commented 1 year ago

Very

azxan2009 commented 1 year ago

GPT4All works fine. When I tried it with LlamaCpp it throws that error. Will it not work with any other embedding?

edwinyoo44 commented 1 year ago

> GPT4All works fine. When I tried it with LlamaCpp it throws that error. Will it not work with any other embedding?

Try converting the model with an older version of llama.cpp. The on-disk GGML format changed between llama.cpp releases, so a model converted with one version may not load in another.

azxan2009 commented 1 year ago

I could not find the old llama.cpp version to download. Can you please post a link here?