PromtEngineer / localGPT

Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.

last snapshot with default configuration fails with pydantic.main.BaseModel none is not an allowed value #524

Open · dportabella opened this issue 1 year ago

dportabella commented 1 year ago

The latest snapshot with the default configuration fails with:

  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
llm
  none is not an allowed value (type=type_error.none.not_allowed)
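For context, this is only the tail end of a chain of failures: the real error is the model load (see the full log below), after which localGPT hands a None LLM to langchain, and langchain's pydantic-v1 `LLMChain` rejects it. A minimal sketch of just that last step, assuming the langchain version shown in the traceback below:

```python
# Minimal sketch of the final failure, assuming langchain's pydantic-v1 LLMChain:
# constructing the chain with llm=None raises the same ValidationError as above.
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(input_variables=["question"], template="{question}")
llm = None  # stand-in for a model loader that failed and returned None

try:
    LLMChain(llm=llm, prompt=prompt)
except Exception as err:
    print(err)  # 1 validation error for LLMChain / llm / none is not an allowed value
```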

Full log:

$ time python ingest.py           # learn all docs from SOURCE_DOCUMENTS/
2023-09-24 22:10:25,028 - INFO - ingest.py:121 - Loading documents from /home/david/localGPT/SOURCE_DOCUMENTS
2023-09-24 22:10:25,034 - INFO - ingest.py:34 - Loading document batch
2023-09-24 22:10:26,058 - INFO - ingest.py:130 - Loaded 1 documents from /home/david/localGPT/SOURCE_DOCUMENTS
2023-09-24 22:10:26,058 - INFO - ingest.py:131 - Split into 195 chunks of text
2023-09-24 22:10:26,450 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
2023-09-24 22:10:26,482 - INFO - instantiator.py:21 - Created a temporary directory at /tmp/tmptc5mr2im
2023-09-24 22:10:26,482 - INFO - instantiator.py:76 - Writing /tmp/tmptc5mr2im/_remote_module_non_scriptable.py
max_seq_length  512

real    0m9,784s
user    0m11,182s
sys     0m3,129s

$ time python run_localGPT.py
2023-09-24 22:10:43,504 - INFO - run_localGPT.py:221 - Running on: cuda
2023-09-24 22:10:43,504 - INFO - run_localGPT.py:222 - Display Source Documents set to: False
2023-09-24 22:10:43,504 - INFO - run_localGPT.py:223 - Use history set to: False
2023-09-24 22:10:43,680 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
max_seq_length  512
2023-09-24 22:10:45,221 - INFO - posthog.py:16 - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2023-09-24 22:10:45,253 - INFO - run_localGPT.py:56 - Loading Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cuda
2023-09-24 22:10:45,253 - INFO - run_localGPT.py:57 - This action can take a few minutes!
2023-09-24 22:10:45,253 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
llama.cpp: loading model from ./models/models--TheBloke--Llama-2-7b-Chat-GGUF/snapshots/ad37d4910ba009a69bb41de44942056d635214ab/llama-2-7b-chat.Q4_K_M.gguf
error loading model: unknown (magic, version) combination: 46554747, 00000002; is this really a GGML file?
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/home/david/localGPT/run_localGPT.py", line 258, in <module>
    main()
  File "/home/david/anaconda3/envs/localGPT/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/david/anaconda3/envs/localGPT/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/home/david/anaconda3/envs/localGPT/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/david/anaconda3/envs/localGPT/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/david/localGPT/run_localGPT.py", line 229, in main
    qa = retrieval_qa_pipline(device_type, use_history, promptTemplate_type="llama")
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/david/localGPT/run_localGPT.py", line 144, in retrieval_qa_pipline
    qa = RetrievalQA.from_chain_type(
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/david/anaconda3/envs/localGPT/lib/python3.11/site-packages/langchain/chains/retrieval_qa/base.py", line 100, in from_chain_type
    combine_documents_chain = load_qa_chain(
                              ^^^^^^^^^^^^^^
  File "/home/david/anaconda3/envs/localGPT/lib/python3.11/site-packages/langchain/chains/question_answering/__init__.py", line 249, in load_qa_chain
    return loader_mapping[chain_type](
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/david/anaconda3/envs/localGPT/lib/python3.11/site-packages/langchain/chains/question_answering/__init__.py", line 73, in _load_stuff_chain
    llm_chain = LLMChain(
                ^^^^^^^^^
  File "/home/david/anaconda3/envs/localGPT/lib/python3.11/site-packages/langchain/load/serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
llm
  none is not an allowed value (type=type_error.none.not_allowed)

real    0m5,021s
user    0m5,932s
sys     0m2,981s
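The root cause is higher up in the log: `error loading model: unknown (magic, version) combination: 46554747, 00000002`. Reading 0x46554747 as a little-endian uint32 gives the ASCII bytes b"GGUF", and 00000002 is the GGUF format version, so the downloaded file is a valid GGUF v2 model; the installed llama.cpp build simply predates GGUF support and still expects GGML-era magic numbers. A small diagnostic sketch (the path is taken from the log above):

```python
# Diagnostic sketch: inspect the first 8 bytes of the model file. If they decode
# to b"GGUF" with version 2, the file itself is fine and the problem is an
# outdated llama-cpp-python that only understands the older GGML container.
import struct

path = ("./models/models--TheBloke--Llama-2-7b-Chat-GGUF/snapshots/"
        "ad37d4910ba009a69bb41de44942056d635214ab/llama-2-7b-chat.Q4_K_M.gguf")

with open(path, "rb") as f:
    magic, version = struct.unpack("<II", f.read(8))

print(hex(magic), version)       # 0x46554747 2 -- matches the error message
print(struct.pack("<I", magic))  # b'GGUF'
```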
keskhanal commented 1 year ago

I faced a similar issue on Ubuntu 22.04.3 LTS.

ionescofung commented 1 year ago

Same issue here.

MohamedYahia3128 commented 1 year ago

This problem occurs because llama-cpp-python is missing (or too old to read GGUF files). You can simply run `pip install llama-cpp-python==0.1.83`.
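A quick way to verify the fix, assuming the installed llama-cpp-python exposes its version string (these releases do); 0.1.83 is recent enough to read GGUF files:

```python
# Sanity check after `pip install llama-cpp-python==0.1.83`: confirm the
# installed version, then re-run run_localGPT.py. Builds without GGUF support
# fail with the "unknown (magic, version) combination" error shown above.
import llama_cpp
print(llama_cpp.__version__)  # expect 0.1.83 (or a newer GGUF-capable release)
```

Alternatively, `pip show llama-cpp-python` reports the same version from the command line.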