PromtEngineer / localGPT

Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.

none is not an allowed value (type=type_error.none.not_allowed) #525

Open SergAnikin opened 10 months ago

SergAnikin commented 10 months ago

On Windows (Windows Server 2019).

Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cpu

This tip didn't help me: pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir (from issues #460 and #475).

The error is raised in RetrievalQA.from_chain_type(llm=llm, ...):

    python run_localGPT.py
    2023-09-24 22:56:15,833 - INFO - run_localGPT.py:225 - Running on: cpu
    2023-09-24 22:56:15,833 - INFO - run_localGPT.py:226 - Display Source Documents set to: False
    2023-09-24 22:56:15,833 - INFO - run_localGPT.py:227 - Use history set to: False
    2023-09-24 22:56:16,654 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
    load INSTRUCTOR_Transformer
    max_seq_length  512
    2023-09-24 22:56:21,005 - INFO - posthog.py:16 - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
    2023-09-24 22:56:21,149 - INFO - run_localGPT.py:56 - Loading Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cpu
    2023-09-24 22:56:21,149 - INFO - run_localGPT.py:57 - This action can take a few minutes!
    2023-09-24 22:56:21,149 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
    Traceback (most recent call last):
      File "C:\PythonScripts\localGPT\run_localGPT.py", line 262, in <module>
        main()
      File "C:\ProgramData\Anaconda3\envs\localGPT\lib\site-packages\click\core.py", line 1157, in __call__
        return self.main(*args, **kwargs)
      File "C:\ProgramData\Anaconda3\envs\localGPT\lib\site-packages\click\core.py", line 1078, in main
        rv = self.invoke(ctx)
      File "C:\ProgramData\Anaconda3\envs\localGPT\lib\site-packages\click\core.py", line 1434, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "C:\ProgramData\Anaconda3\envs\localGPT\lib\site-packages\click\core.py", line 783, in invoke
        return __callback(*args, **kwargs)
      File "C:\PythonScripts\localGPT\run_localGPT.py", line 233, in main
        qa = retrieval_qa_pipline(device_type, use_history, promptTemplate_type="llama")
      File "C:\PythonScripts\localGPT\run_localGPT.py", line 148, in retrieval_qa_pipline
        qa = RetrievalQA.from_chain_type(
      File "C:\ProgramData\Anaconda3\envs\localGPT\lib\site-packages\langchain\chains\retrieval_qa\base.py", line 100, in from_chain_type
        combine_documents_chain = load_qa_chain(
      File "C:\ProgramData\Anaconda3\envs\localGPT\lib\site-packages\langchain\chains\question_answering\__init__.py", line 249, in load_qa_chain
        return loader_mapping[chain_type](
      File "C:\ProgramData\Anaconda3\envs\localGPT\lib\site-packages\langchain\chains\question_answering\__init__.py", line 73, in _load_stuff_chain
        llm_chain = LLMChain(
      File "C:\ProgramData\Anaconda3\envs\localGPT\lib\site-packages\langchain\load\serializable.py", line 74, in __init__
        super().__init__(**kwargs)
      File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
    pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
    llm
      none is not an allowed value (type=type_error.none.not_allowed)

SergAnikin commented 10 months ago

The error above occurs because LlamaCpp(**kwargs) (in localGPT\load_models.py) raises:

    Could not load Llama model from path: ./models\models--TheBloke--Llama-2-7b-Chat-GGUF\snapshots\ad37d4910ba009a69bb41de44942056d635214ab\llama-2-7b-chat.Q4_K_M.gguf. Received error Failed to load shared library 'C:\ProgramData\Anaconda3\envs\localGPT\lib\site-packages\llama_cpp\llama.dll': [WinError 1114] A dynamic link library (DLL) initialization routine failed (type=value_error)
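For what it's worth, the DLL failure reproduces without localGPT at all. A minimal check, assuming the same conda environment (llama_cpp and its Llama class are the real package and API; nothing here is localGPT-specific):

    # Importing llama_cpp loads llama.dll, so the same [WinError 1114]
    # should appear here if the shared library cannot initialize.
    from llama_cpp import Llama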

And the except block then returns None for llm. I think the except section must handle this type of error as well, not only the "ggml" in model_basename case:

    try:
        return LlamaCpp(**kwargs)
    except Exception:
        if "ggml" in model_basename:
            # logging.info (lowercase) is the logging call; logging.INFO is a level constant
            logging.info("If you were using GGML model, LLAMA-CPP Dropped Support, Use GGUF Instead")
            return None
        raise
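To see why that None then surfaces as the pydantic error from my first post, here is a toy reproduction. LLMChain and PromptTemplate are the real langchain classes; the one-variable prompt is just a placeholder:

    from langchain.chains import LLMChain
    from langchain.prompts import PromptTemplate

    prompt = PromptTemplate(input_variables=["question"], template="{question}")
    # Passing llm=None fails pydantic validation with exactly:
    #   1 validation error for LLMChain
    #   llm
    #     none is not an allowed value (type=type_error.none.not_allowed)
    LLMChain(llm=None, prompt=prompt)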
WiktorLigeza commented 10 months ago

any solution?

domik82 commented 10 months ago

Same issue on Windows 10.

It's a fresh install on a new laptop with Python 3.10. I used venv instead of conda, which shouldn't really change anything.

SergAnikin commented 9 months ago

> any solution?

I deployed localGPT on Ubuntu 22.04; that was the solution.

adnanrizve commented 9 months ago

This happened to me as well. In my case, pip install llama-cpp-python was the solution, since this package is missing from requirements.txt but required for the code to run. In another instance I had to install a specific version of llama-cpp-python; with the latest version, pip install was not working.
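In case it helps the next person, a quick way to confirm the package imports and to see which version you have before pinning it in requirements.txt (llama_cpp does expose __version__; the thread doesn't settle on a known-good version, so which one to pin depends on your machine):

    # Importing llama_cpp also loads its native library, so this doubles
    # as a smoke test; print the version so a working one can be pinned.
    import llama_cpp
    print(llama_cpp.__version__)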