PromtEngineer / localGPT

Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.
Apache License 2.0

error on run_localGPT.py: none is not an allowed value (type=type_error.none.not_allowed) #584

Open aasem-research-work opened 1 year ago

aasem-research-work commented 1 year ago

Hi

I tried to execute the following:

(localGPT) E:\dev\python\localGPT>python run_localGPT.py

It begins correctly and downloads the required files, but ends with an error:

(localGPT) E:\dev\python\localGPT>python run_localGPT.py
2023-10-13 02:04:31,251 - INFO - run_localGPT.py:229 - Running on: cpu
2023-10-13 02:04:31,251 - INFO - run_localGPT.py:230 - Display Source Documents set to: False
2023-10-13 02:04:31,251 - INFO - run_localGPT.py:231 - Use history set to: False
2023-10-13 02:04:31,736 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
max_seq_length 512
2023-10-13 02:04:36,411 - INFO - posthog.py:16 - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2023-10-13 02:04:36,558 - INFO - run_localGPT.py:56 - Loading Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cpu
2023-10-13 02:04:36,558 - INFO - run_localGPT.py:57 - This action can take a few minutes!
2023-10-13 02:04:36,558 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
Traceback (most recent call last):
  File "E:\dev\python\localGPT\run_localGPT.py", line 266, in <module>
    main()
  :
  :
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
llm
  none is not an allowed value (type=type_error.none.not_allowed)

Would anyone please advise on what I am missing?
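For context on what the traceback means: LangChain's LLMChain declares llm as a required pydantic field, so passing llm=None fails validation before any inference runs. A minimal stand-in (not the real LangChain class) that mimics the check:

```python
# Hypothetical stand-in for LLMChain's required-field validation, to
# illustrate the error: constructing the chain with llm=None is rejected
# before anything else happens.
class LLMChainStandIn:
    def __init__(self, llm):
        if llm is None:
            # pydantic v1 reports this as:
            # "none is not an allowed value (type=type_error.none.not_allowed)"
            raise ValueError("none is not an allowed value")
        self.llm = llm
```

In other words, the error is a symptom: something upstream handed the chain a None model, and that is where to look.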

Pranav-Kimbodo commented 1 year ago

I faced a similar issue but have found no solution :(

tonys010968 commented 1 year ago

I'm also getting this error, and I think it has to do with the PromptType: the code is sending it a string, but I don't think it expects one. I am still looking at it though.

eslam-gomaa-careem commented 1 year ago

same

eslam-gomaa-careem commented 1 year ago

try this

pip3.10 install llama-cpp-python

It worked for me

https://github.com/PromtEngineer/localGPT/issues/525#issuecomment-1745766968
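A quick way to check whether a broken llama-cpp-python install is the cause on your machine (a hedged sketch; llama_cpp is the module name that the llama-cpp-python package installs):

```python
def llama_cpp_available():
    # Returns True if llama-cpp-python is importable in the current
    # environment; the GGUF loading path can only build a LlamaCpp LLM
    # when this import succeeds.
    try:
        import llama_cpp  # installed by the llama-cpp-python package
        return True
    except ImportError:
        return False

print("llama-cpp-python importable:", llama_cpp_available())
```

If this prints False inside the same environment you run run_localGPT.py from, reinstalling llama-cpp-python is the first thing to try.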

aasem-research-work commented 1 year ago

I tried pip install llama-cpp-python

but it didn't fix mine.

tonys010968 commented 1 year ago

try this

pip3.10 install llama-cpp-python

It worked for me

#525 (comment)

What version of llama-cpp-python are you running? I am running 0.1.83 and I am still getting the error.

tonys010968 commented 1 year ago

try this

pip3.10 install llama-cpp-python

It worked for me #525 (comment)

What version of llama-cpp-python are you running? I am running 0.1.83 and I am still getting the error.

And I just downgraded to version 0.1.48 because PromtEngineer put out a video on localGPT, and I figured I would downgrade to the version that was current around when that video came out. Still not working though.

aasem-research-work commented 1 year ago

I have tried the following but had no luck:

Upon debugging, I observed that load_model is returning None. It might be due to an incorrect path to the llama-2-7b-chat.Q4_K_M.gguf file or its directory.

    # load the llm pipeline
    llm = load_model(device_type, model_id=MODEL_ID, model_basename=MODEL_BASENAME, LOGGING=logging)
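One way to make this failure obvious at the source is to fail fast before the None reaches LLMChain. This is a hypothetical guard, not part of localGPT; require_model is a name invented for this sketch:

```python
def require_model(llm, model_id):
    # Fail fast with a readable message instead of letting a None llm
    # reach LLMChain, where pydantic produces the cryptic
    # "none is not an allowed value" error.
    if llm is None:
        raise RuntimeError(
            f"load_model() returned None for {model_id!r}; check the model "
            "path and that llama-cpp-python installed correctly."
        )
    return llm

# Hypothetical usage inside run_localGPT.py:
# llm = require_model(load_model(device_type, model_id=MODEL_ID, ...), MODEL_ID)
```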
orchardc commented 1 year ago

Same problem here. Is there a solution for Windows users?

tonys010968 commented 1 year ago

Not so far. I think that aasem's post above is on track. The models path does seem to be off, but I am not sure how to fix it or what the correct path should be.

The path I see is as follows:

'./models\models--TheBloke--Llama-2-7b-Chat-GGUF\snapshots\191239b3e26b2882fb562ffccdd1cf0f65402adb\llama-2-7b-chat.Q4_K_M.gguf'

See how it uses a / and then double backslashes (\\) in the path. I feel like that is wrong, but I am not sure how to fix it.
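For what it's worth, Windows generally accepts mixed separators, but normalizing the path makes it easier to check whether the file actually exists. A sketch using the path from the comment above (ntpath applies Windows path rules even when run on another OS):

```python
import ntpath  # Windows path semantics, regardless of the host OS

# The mixed-separator path reported above.
mixed = ('./models\\models--TheBloke--Llama-2-7b-Chat-GGUF\\snapshots\\'
         '191239b3e26b2882fb562ffccdd1cf0f65402adb\\llama-2-7b-chat.Q4_K_M.gguf')

# normpath collapses "./" and converts "/" to "\\".
normalized = ntpath.normpath(mixed)
print(normalized)
```

Once normalized, os.path.exists(normalized) on the Windows machine will tell you whether the model file is really where load_models.py expects it, which is a more likely culprit than the separators themselves.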

ShishirMaidenCodeLife commented 1 year ago

Try this: pip install llama-cpp-python==0.1.83

Worked for me... hope it works for you too!
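If pinning the version helps, it is worth confirming which version actually got installed in the active environment. A small sketch using only the standard library (the 0.1.83 pin is the one from the comment above):

```python
from importlib import metadata

def installed_version(package):
    # Returns the installed version string, or None if the package
    # is absent from the current environment.
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

print("llama-cpp-python:", installed_version("llama-cpp-python"))
```

Seeing None here, or a different version than the one you just installed, usually means pip targeted a different Python environment than the one running run_localGPT.py.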

orchardc commented 1 year ago

Thanks @ShishirMaidenCodeLife, it works for me on CPU.

ramaseshireddykolli commented 1 year ago

@ShishirMaidenCodeLife thanks, the None error is gone now, but I am not getting any response to the entered query. It goes into an unresponsive state, and I had to kill the script using Ctrl+Z.

Steps followed:

(ivrGPT-env) root@Audio-Temp:/home/ubuntu/ram/gpt/localGPT# python ingest.py --device_type cpu
Requirement already satisfied: pysqlite3-binary in /home/ubuntu/ram/gpt/ivrGPT-env/lib/python3.10/site-packages (0.5.2.post1)
[notice] A new release of pip is available: 23.0.1 -> 23.2.1
[notice] To update, run: pip install --upgrade pip
2023-10-17 05:40:29,142 - INFO - ingest.py:121 - Loading documents from /home/ubuntu/ram/gpt/localGPT/SOURCE_DOCUMENTS
2023-10-17 05:40:29,151 - INFO - ingest.py:34 - Loading document batch
2023-10-17 05:40:29,195 - INFO - ingest.py:130 - Loaded 1 documents from /home/ubuntu/ram/gpt/localGPT/SOURCE_DOCUMENTS
2023-10-17 05:40:29,195 - INFO - ingest.py:131 - Split into 4 chunks of text
2023-10-17 05:40:31,424 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
max_seq_length 512

(ivrGPT-env) root@Audio-Temp:/home/ubuntu/ram/gpt/localGPT# python run_localGPT.py --device_type cpu
Requirement already satisfied: pysqlite3-binary in /home/ubuntu/ram/gpt/ivrGPT-env/lib/python3.10/site-packages (0.5.2.post1)
[notice] A new release of pip is available: 23.0.1 -> 23.2.1
[notice] To update, run: pip install --upgrade pip
2023-10-17 05:41:36,794 - INFO - run_localGPT.py:231 - Running on: cpu
2023-10-17 05:41:36,794 - INFO - run_localGPT.py:232 - Display Source Documents set to: False
2023-10-17 05:41:36,794 - INFO - run_localGPT.py:233 - Use history set to: False
2023-10-17 05:41:37,182 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
max_seq_length 512
2023-10-17 05:41:49,462 - INFO - run_localGPT.py:57 - Loading Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cpu
2023-10-17 05:41:49,462 - INFO - run_localGPT.py:58 - This action can take a few minutes!
2023-10-17 05:41:49,462 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
Downloading (…)-7b-chat.Q4_K_M.gguf: 100%|███████████████████████████████████████████████████████████████████████████████████| 4.08G/4.08G [00:27<00:00, 148MB/s]
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from ./models/models--TheBloke--Llama-2-7b-Chat-GGUF/snapshots/191239b3e26b2882fb562ffccdd1cf0f65402adb/llama-2-7b-chat.Q4_K_M.gguf (version GGUF V2 (latest))
..................................................................................................
llama_new_context_with_model: kv self size = 2048.00 MB
llama_new_context_with_model: compute buffer total size = 281.47 MB
AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |

Enter a query: what are the terms and conditions of laptop?
[1]+ Stopped python run_localGPT.py --device_type cpu
(ivrGPT-env) root@Audio-Temp:/home/ubuntu/ram/gpt/localGPT#

Can someone please tell me what I am doing wrong? I am using all default settings.
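A note on the log above: the "[1]+ Stopped" line means Ctrl+Z suspended the process rather than killing it, and on CPU a 7B Q4 model can take minutes per answer, so what looks like a hang may just be slow inference. One way to distinguish slow from stuck is to wrap the call with a timeout; run_with_timeout below is a hypothetical helper, not part of localGPT:

```python
import threading

def run_with_timeout(fn, timeout_s):
    # Run fn in a daemon thread; return its result, or None if it has not
    # finished within timeout_s seconds (likely still generating on CPU).
    result = {}
    worker = threading.Thread(target=lambda: result.setdefault("value", fn()),
                              daemon=True)
    worker.start()
    worker.join(timeout_s)
    return result.get("value")

# Hypothetical usage around the QA chain:
# answer = run_with_timeout(lambda: qa(query), 600)
```

If the call returns a result after a long wait, the setup is fine and the machine is simply slow; if it never returns, something is genuinely stuck.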

tonys010968 commented 11 months ago

I did manage to get this working. I can't remember all of the changes I made, but I exported my conda environment to YAML. Here it is. I hope this helps someone.

name: localGPT
channels: