PromtEngineer / localGPT

Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
Apache License 2.0

pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain llm none is not an allowed value (type=type_error.none.not_allowed) #535

Open varungoti opened 1 year ago

varungoti commented 1 year ago

When I run run_localGPT.py, I get the error below:

2023-09-27 14:49:29,036 - INFO - run_localGPT.py:221 - Running on: cuda
2023-09-27 14:49:29,036 - INFO - run_localGPT.py:222 - Display Source Documents set to: False
2023-09-27 14:49:29,036 - INFO - run_localGPT.py:223 - Use history set to: False
2023-09-27 14:49:29,316 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
max_seq_length  512
2023-09-27 14:49:32,007 - INFO - posthog.py:16 - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2023-09-27 14:49:32,066 - INFO - run_localGPT.py:56 - Loading Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cuda
2023-09-27 14:49:32,066 - INFO - run_localGPT.py:57 - This action can take a few minutes!
2023-09-27 14:49:32,066 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
Traceback (most recent call last):
  File "/home/admin1/Documents/chatgpt/localgpt_llama2/run_localGPT.py", line 258, in <module>
    main()
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/admin1/Documents/chatgpt/localgpt_llama2/run_localGPT.py", line 229, in main
    qa = retrieval_qa_pipline(device_type, use_history, promptTemplate_type="llama")
  File "/home/admin1/Documents/chatgpt/localgpt_llama2/run_localGPT.py", line 144, in retrieval_qa_pipline
    qa = RetrievalQA.from_chain_type(
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py", line 100, in from_chain_type
    combine_documents_chain = load_qa_chain(
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 249, in load_qa_chain
    return loader_mapping[chain_type](
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 73, in _load_stuff_chain
    llm_chain = LLMChain(
  File "/home/admin1/anaconda3/envs/localgpt_llama2/lib/python3.10/site-packages/langchain/load/serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
llm
  none is not an allowed value (type=type_error.none.not_allowed)

Has anyone encountered this error? If so, is there a workaround?

Zephyruswind commented 1 year ago

Got the same error just now

FeRm00 commented 1 year ago

It's the same error as in this post:

https://github.com/PromtEngineer/localGPT/issues/501

Try:

set CMAKE_ARGS=-DLLAMA_CUBLAS=on

set FORCE_CMAKE=1

And now, if you are using a GGUF language model then:

pip install llama-cpp-python==0.1.83

If you are using a GGML model:

pip install llama-cpp-python==0.1.76

The model you are using is set in constants.py. The default is:

MODEL_ID = "TheBloke/Llama-2-7b-Chat-GGUF"
MODEL_BASENAME = "llama-2-7b-chat.Q4_K_M.gguf"
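
To check that the reinstalled llama-cpp-python can actually load the model, independent of localGPT, a minimal smoke test along these lines can help (a sketch; the model path is an assumption and should point at wherever your GGUF file lives):

from llama_cpp import Llama

# Hypothetical path: adjust to the location of your downloaded GGUF file.
llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)
out = llm("Q: What is the capital of France? A:", max_tokens=16)
print(out["choices"][0]["text"])

If this raises an error, the problem is the llama-cpp-python build or the model file itself, not localGPT's chain setup.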

Pradeep987654321 commented 1 year ago

If you are using a Windows environment, use pip install llama-cpp-python==0.1.83 for GGUF models.

BigBoom123 commented 1 year ago

Got the same error just now

Same here.

BigBoom123 commented 1 year ago

(localgpt) C:\06 projects\localCPT\localGPT>python run_localGPT.py --device_type cpu
2023-09-29 21:56:08,723 - INFO - run_localGPT.py:221 - Running on: cpu
2023-09-29 21:56:08,723 - INFO - run_localGPT.py:222 - Display Source Documents set to: False
2023-09-29 21:56:08,723 - INFO - run_localGPT.py:223 - Use history set to: False
2023-09-29 21:56:23,688 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
max_seq_length  512
2023-09-29 22:10:05,632 - INFO - posthog.py:16 - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2023-09-29 22:10:24,673 - INFO - run_localGPT.py:56 - Loading Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cpu
2023-09-29 22:10:24,673 - INFO - run_localGPT.py:57 - This action can take a few minutes!
2023-09-29 22:10:24,673 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
Traceback (most recent call last):
  File "C:\06 projects\localCPT\localGPT\run_localGPT.py", line 258, in <module>
    main()
  File "C:\Users\naidenov\anaconda3\envs\localgpt\lib\site-packages\click\core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\naidenov\anaconda3\envs\localgpt\lib\site-packages\click\core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "C:\Users\naidenov\anaconda3\envs\localgpt\lib\site-packages\click\core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\naidenov\anaconda3\envs\localgpt\lib\site-packages\click\core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "C:\06 projects\localCPT\localGPT\run_localGPT.py", line 229, in main
    qa = retrieval_qa_pipline(device_type, use_history, promptTemplate_type="llama")
  File "C:\06 projects\localCPT\localGPT\run_localGPT.py", line 144, in retrieval_qa_pipline
    qa = RetrievalQA.from_chain_type(
  File "C:\Users\naidenov\anaconda3\envs\localgpt\lib\site-packages\langchain\chains\retrieval_qa\base.py", line 100, in from_chain_type
    combine_documents_chain = load_qa_chain(
  File "C:\Users\naidenov\anaconda3\envs\localgpt\lib\site-packages\langchain\chains\question_answering\__init__.py", line 249, in load_qa_chain
    return loader_mapping[chain_type](
  File "C:\Users\naidenov\anaconda3\envs\localgpt\lib\site-packages\langchain\chains\question_answering\__init__.py", line 73, in _load_stuff_chain
    llm_chain = LLMChain(
  File "C:\Users\naidenov\anaconda3\envs\localgpt\lib\site-packages\langchain\load\serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
llm
  none is not an allowed value (type=type_error.none.not_allowed)

BigBoom123 commented 1 year ago

Packages in the environment:

Name Version Build Channel

accelerate 0.23.0 pypi_0 pypi
aiohttp 3.8.5 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
altair 5.1.1 pypi_0 pypi
anyio 4.0.0 pypi_0 pypi
async-timeout 4.0.3 pypi_0 pypi
attrs 23.1.0 pypi_0 pypi
auto-gptq 0.2.2 pypi_0 pypi
backoff 2.2.1 pypi_0 pypi
beautifulsoup4 4.12.2 pypi_0 pypi
bitsandbytes-windows 0.37.5 pypi_0 pypi
blinker 1.6.2 pypi_0 pypi
bzip2 1.0.8 he774522_0
ca-certificates 2023.08.22 haa95532_0
cachetools 5.3.1 pypi_0 pypi
certifi 2023.7.22 pypi_0 pypi
cffi 1.15.1 pypi_0 pypi
chardet 5.2.0 pypi_0 pypi
charset-normalizer 3.2.0 pypi_0 pypi
chroma-hnswlib 0.7.2 pypi_0 pypi
chromadb 0.4.6 pypi_0 pypi
click 8.1.7 pypi_0 pypi
colorama 0.4.6 pypi_0 pypi
coloredlogs 15.0.1 pypi_0 pypi
contourpy 1.1.1 pypi_0 pypi
cryptography 41.0.4 pypi_0 pypi
cycler 0.11.0 pypi_0 pypi
dataclasses-json 0.5.14 pypi_0 pypi
datasets 2.14.5 pypi_0 pypi
dill 0.3.7 pypi_0 pypi
diskcache 5.6.3 pypi_0 pypi
docx2txt 0.8 pypi_0 pypi
emoji 2.8.0 pypi_0 pypi
et-xmlfile 1.1.0 pypi_0 pypi
exceptiongroup 1.1.3 pypi_0 pypi
faiss-cpu 1.7.4 pypi_0 pypi
faker 19.6.2 pypi_0 pypi
fastapi 0.99.1 pypi_0 pypi
favicon 0.7.0 pypi_0 pypi
filelock 3.12.4 pypi_0 pypi
filetype 1.2.0 pypi_0 pypi
flask 2.3.3 pypi_0 pypi
flatbuffers 23.5.26 pypi_0 pypi
fonttools 4.42.1 pypi_0 pypi
frozenlist 1.4.0 pypi_0 pypi
fsspec 2023.6.0 pypi_0 pypi
gitdb 4.0.10 pypi_0 pypi
gitpython 3.1.37 pypi_0 pypi
greenlet 2.0.2 pypi_0 pypi
h11 0.14.0 pypi_0 pypi
htbuilder 0.6.2 pypi_0 pypi
httptools 0.6.0 pypi_0 pypi
huggingface-hub 0.17.3 pypi_0 pypi
humanfriendly 10.0 pypi_0 pypi
idna 3.4 pypi_0 pypi
importlib-metadata 6.8.0 pypi_0 pypi
importlib-resources 6.1.0 pypi_0 pypi
instructorembedding 1.0.1 pypi_0 pypi
itsdangerous 2.1.2 pypi_0 pypi
jinja2 3.1.2 pypi_0 pypi
joblib 1.3.2 pypi_0 pypi
jsonschema 4.19.1 pypi_0 pypi
jsonschema-specifications 2023.7.1 pypi_0 pypi
kiwisolver 1.4.5 pypi_0 pypi
langchain 0.0.267 pypi_0 pypi
langsmith 0.0.41 pypi_0 pypi
libffi 3.4.4 hd77b12b_0
llama-cpp-python 0.1.76 pypi_0 pypi
lxml 4.9.3 pypi_0 pypi
markdown 3.4.4 pypi_0 pypi
markdown-it-py 3.0.0 pypi_0 pypi
markdownlit 0.0.7 pypi_0 pypi
markupsafe 2.1.3 pypi_0 pypi
marshmallow 3.20.1 pypi_0 pypi
matplotlib 3.8.0 pypi_0 pypi
mdurl 0.1.2 pypi_0 pypi
monotonic 1.6 pypi_0 pypi
more-itertools 10.1.0 pypi_0 pypi
mpmath 1.3.0 pypi_0 pypi
multidict 6.0.4 pypi_0 pypi
multiprocess 0.70.15 pypi_0 pypi
mypy-extensions 1.0.0 pypi_0 pypi
networkx 3.1 pypi_0 pypi
nltk 3.8.1 pypi_0 pypi
numexpr 2.8.7 pypi_0 pypi
numpy 1.26.0 pypi_0 pypi
onnxruntime 1.16.0 pypi_0 pypi
openapi-schema-pydantic 1.2.4 pypi_0 pypi
openpyxl 3.1.2 pypi_0 pypi
openssl 1.1.1w h2bbff1b_0
overrides 7.4.0 pypi_0 pypi
packaging 23.1 pypi_0 pypi
pandas 2.1.1 pypi_0 pypi
pdfminer-six 20221105 pypi_0 pypi
pillow 9.5.0 pypi_0 pypi
pip 23.2.1 py310haa95532_0
posthog 3.0.2 pypi_0 pypi
protobuf 3.20.0 pypi_0 pypi
psutil 5.9.5 pypi_0 pypi
pulsar-client 3.3.0 pypi_0 pypi
pyarrow 13.0.0 pypi_0 pypi
pycparser 2.21 pypi_0 pypi
pydantic 1.10.12 pypi_0 pypi
pydeck 0.8.1b0 pypi_0 pypi
pygments 2.16.1 pypi_0 pypi
pymdown-extensions 10.3 pypi_0 pypi
pyparsing 3.1.1 pypi_0 pypi
pypika 0.48.9 pypi_0 pypi
pyreadline3 3.4.1 pypi_0 pypi
python 3.10.0 h96c0403_3
python-dateutil 2.8.2 pypi_0 pypi
python-dotenv 1.0.0 pypi_0 pypi
python-iso639 2023.6.15 pypi_0 pypi
python-magic 0.4.27 pypi_0 pypi
pytz 2023.3.post1 pypi_0 pypi
pyyaml 6.0.1 pypi_0 pypi
referencing 0.30.2 pypi_0 pypi
regex 2023.8.8 pypi_0 pypi
requests 2.31.0 pypi_0 pypi
rich 13.5.3 pypi_0 pypi
rouge 1.0.1 pypi_0 pypi
rpds-py 0.10.3 pypi_0 pypi
safetensors 0.3.3 pypi_0 pypi
scikit-learn 1.3.1 pypi_0 pypi
scipy 1.11.2 pypi_0 pypi
sentence-transformers 2.2.2 pypi_0 pypi
sentencepiece 0.1.99 pypi_0 pypi
setuptools 68.0.0 py310haa95532_0
six 1.16.0 pypi_0 pypi
smmap 5.0.1 pypi_0 pypi
sniffio 1.3.0 pypi_0 pypi
soupsieve 2.5 pypi_0 pypi
sqlalchemy 2.0.21 pypi_0 pypi
sqlite 3.41.2 h2bbff1b_0
st-annotated-text 4.0.1 pypi_0 pypi
starlette 0.27.0 pypi_0 pypi
streamlit 1.27.0 pypi_0 pypi
streamlit-camera-input-live 0.2.0 pypi_0 pypi
streamlit-card 0.0.61 pypi_0 pypi
streamlit-embedcode 0.1.2 pypi_0 pypi
streamlit-extras 0.3.2 pypi_0 pypi
streamlit-faker 0.0.2 pypi_0 pypi
streamlit-image-coordinates 0.1.6 pypi_0 pypi
streamlit-keyup 0.2.0 pypi_0 pypi
streamlit-toggle-switch 1.0.2 pypi_0 pypi
streamlit-vertical-slider 1.0.2 pypi_0 pypi
sympy 1.12 pypi_0 pypi
tabulate 0.9.0 pypi_0 pypi
tenacity 8.2.3 pypi_0 pypi
threadpoolctl 3.2.0 pypi_0 pypi
tk 8.6.12 h2bbff1b_0
tokenizers 0.13.3 pypi_0 pypi
toml 0.10.2 pypi_0 pypi
toolz 0.12.0 pypi_0 pypi
torch 2.0.1 pypi_0 pypi
torchvision 0.15.2 pypi_0 pypi
tornado 6.3.3 pypi_0 pypi
tqdm 4.66.1 pypi_0 pypi
transformers 4.33.2 pypi_0 pypi
typing-extensions 4.8.0 pypi_0 pypi
typing-inspect 0.9.0 pypi_0 pypi
tzdata 2023.3 pypi_0 pypi
tzlocal 5.0.1 pypi_0 pypi
unstructured 0.10.16 pypi_0 pypi
urllib3 1.26.6 pypi_0 pypi
uvicorn 0.23.2 pypi_0 pypi
validators 0.22.0 pypi_0 pypi
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
watchdog 3.0.0 pypi_0 pypi
watchfiles 0.20.0 pypi_0 pypi
websockets 11.0.3 pypi_0 pypi
werkzeug 2.3.7 pypi_0 pypi
wheel 0.41.2 py310haa95532_0
xxhash 3.3.0 pypi_0 pypi
xz 5.4.2 h8cc25b3_0
yarl 1.9.2 pypi_0 pypi
zipp 3.17.0 pypi_0 pypi
zlib 1.2.13 h8cc25b3_0

BigBoom123 commented 1 year ago

It's the same error as in this post:

#501

Try:

set CMAKE_ARGS=-DLLAMA_CUBLAS=on

set FORCE_CMAKE=1

And now, if you are using a GGUF language model then:

pip install llama-cpp-python==0.1.83

If you are using a GGML model:

pip install llama-cpp-python==0.1.76

The model you are using is set in constants.py. The default is:

MODEL_ID = "TheBloke/Llama-2-7b-Chat-GGUF"
MODEL_BASENAME = "llama-2-7b-chat.Q4_K_M.gguf"

Didn't help in my case.

Metassive commented 1 year ago

It's the same error as in this post:

#501

Try:

set CMAKE_ARGS=-DLLAMA_CUBLAS=on

set FORCE_CMAKE=1

And now, if you are using a GGUF language model then:

pip install llama-cpp-python==0.1.83

If you are using a GGML model:

pip install llama-cpp-python==0.1.76

The model you are using is set in constants.py. The default is:

MODEL_ID = "TheBloke/Llama-2-7b-Chat-GGUF"
MODEL_BASENAME = "llama-2-7b-chat.Q4_K_M.gguf"

Is it possible that I need to add some language model or modify certain code parameters in the run_localGPT.py file?

I have tried with:

set CMAKE_ARGS=-DLLAMA_CUBLAS=on
set FORCE_CMAKE=1

After with:

pip install llama-cpp-python
pip install llama-cpp-python==0.1.76
pip install llama-cpp-python==0.1.83

I have also tried with:
conda install -c conda-forge llama-cpp-python

All commands give errors like this:

Collecting llama-cpp-python
  Using cached llama_cpp_python-0.2.10.tar.gz (3.6 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in c:\users\pc\anaconda3\lib\site-packages (from llama-cpp-python) (4.7.1)
Requirement already satisfied: numpy>=1.20.0 in c:\users\pc\anaconda3\lib\site-packages (from llama-cpp-python) (1.24.3)
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Obtaining dependency information for diskcache>=5.6.1 from https://files.pythonhosted.org/packages/3f/27/4570e78fc0bf5ea0ca45eb1de3818a23787af9b390c0b0a0033a1b8236f9/diskcache-5.6.3-py3-none-any.whl.metadata
  Using cached diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Using cached diskcache-5.6.3-py3-none-any.whl (45 kB)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [20 lines of output]
      scikit-build-core 0.5.1 using CMake 3.27.5 (wheel)
      Configuring CMake...
      2023-09-30 20:15:28,116 - scikit_build_core - WARNING - Can't find a Python library, got libdir=None, ldlibrary=None, multiarch=None, masd=None
      loading initial cache file C:\Users\pc\AppData\Local\Temp\tmpafhcttti\build\CMakeInit.txt
      -- Building for: NMake Makefiles
      CMake Error at CMakeLists.txt:3 (project):
        Running

     'nmake' '-?'

    failed with:

     The system cannot find the file specified

  CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
  CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
  -- Configuring incomplete, errors occurred!

  *** CMake configuration failed
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

juliomap commented 1 year ago

load_models.py at line 58: return LlamaCpp(**kwargs) raises an exception.

**kwargs seems correct, with all of its parameters having acceptable values.

I have tried this both on Ubuntu 22.04 and in the Docker installation, and in both cases the error is the same (the exception results in a null model pipeline being returned).

And with two different GGUF models: models--TheBloke--Llama-2-13b-Chat-GGUF and models--TheBloke--Llama-2-7b-Chat-GGUF.

Maybe GGUF v2 is still not supported?
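
For readers following along: the pydantic error at the top of this thread is consistent with that exception being swallowed. A simplified sketch of the pattern (an illustration, not the exact load_models.py code):

from langchain.llms import LlamaCpp

def load_gguf_model(model_path, **kwargs):
    try:
        # If llama-cpp-python is missing, was built without GGUF support,
        # or cannot read the file, this constructor raises.
        return LlamaCpp(model_path=model_path, **kwargs)
    except Exception as err:
        print(f"LlamaCpp failed: {err}")
        return None  # the caller receives None instead of a model

llm = load_gguf_model("./models/llama-2-7b-chat.Q4_K_M.gguf")
# Passing llm=None into LLMChain is what produces:
# pydantic.error_wrappers.ValidationError: ... none is not an allowed value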

juliomap commented 1 year ago

In my case, I have solved it by installing the latest version of llama-cpp-python:

export CMAKE_ARGS="-DLLAMA_CUBLAS=on -DCMAKE_CUDA_ARCHITECTURES=native"
export FORCE_CMAKE=1
export PATH=$PATH:/usr/local/cuda/bin
pip install llama-cpp-python

SciTechEnthusiast commented 1 year ago

After adding llama-cpp-python to requirements.txt, it worked for me.

VSCBSt commented 1 year ago

Any help on that? Nothing from the above has worked for me.

jujufalexander commented 1 year ago

Running 'pip install llama-cpp-python' solved the exact same errors for me. You could also just add 'llama-cpp-python' to your requirements.txt and run it, as suggested above by @SciTechEnthusiast.

dhanesh12twice commented 12 months ago

Nothing worked for me, even after installing llama-cpp-python.

DouglasFontans commented 11 months ago

@dhanesh12twice, try downloading this model file into the models folder: https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin. It worked for me.

https://github.com/imartinez/privateGPT/issues/461

sgf0156 commented 11 months ago

pip install pydantic==1.9.0

hugangyi110 commented 11 months ago

pip install "pydantic<2" can solve this problem. (Quoting the requirement keeps the shell from treating < as redirection.)
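
For context: langchain at this vintage (0.0.267 in the environment listed above) is built on pydantic v1, which is why these suggestions pin pydantic below 2. A minimal check to confirm which version is active after pinning:

import pydantic

print(pydantic.VERSION)  # should print a 1.x version, e.g. 1.10.12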

Ath3neNoctua commented 11 months ago

Same error; I tried all of the above, still no joy.

fvaneijk commented 10 months ago

If running on Windows, the following helped. I'm using an RTX 3090. Note that on Windows, llama-cpp-python is by default built only for CPU; to build it for GPU acceleration, I used the following in a VS Code terminal. This does require that the NVIDIA CUDA libraries are installed, as well as the Visual Studio C++ compiler (I have the C++ compiler from VS 2019 installed). For Linux, following the instructions in the README.md (i.e., Environment Setup) worked.

pip uninstall -y llama-cpp-python
$env:CMAKE_ARGS="-DLLAMA_CUBLAS=on"
$env:FORCE_CMAKE=1
pip install llama-cpp-python --force-reinstall --no-cache-dir --verbose
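
After a rebuild like this, one way to confirm that layers are actually being offloaded is to load the model with verbose output and a nonzero n_gpu_layers (a sketch; the model path is an assumption):

from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical path
    n_gpu_layers=32,  # request GPU offload; 0 keeps everything on the CPU
    verbose=True,     # the startup log should mention layers offloaded to GPU
)

If the startup log shows no CUDA/cuBLAS lines, the wheel was most likely still built CPU-only.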

PlinyTheMid commented 9 months ago

This should fix the problem.

pip install llama-cpp-python

Sooraj-kj commented 8 months ago

Any help on that? Nothing from the above has worked for me.

Did you get an answer?