PromtEngineer / localGPT

Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
Apache License 2.0

Error in running this run_localGPT.py #688

Open VISWANATH78 opened 7 months ago

VISWANATH78 commented 7 months ago

I am getting an issue like this:

```
(localGPT) viswanath:~/localGPT$ python run_localGPT.py --device_type cuda
/home/miniconda3/envs/localGPT/lib/python3.10/site-packages/torch/cuda/__init__.py:138: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 11000). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
  return torch._C._cuda_getDeviceCount() > 0
2023-12-17 01:52:31,506 - INFO - run_localGPT.py:221 - Running on: cuda
2023-12-17 01:52:31,506 - INFO - run_localGPT.py:222 - Display Source Documents set to: False
2023-12-17 01:52:31,506 - INFO - run_localGPT.py:223 - Use history set to: False
2023-12-17 01:52:31,834 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
max_seq_length 512
2023-12-17 01:52:32,947 - INFO - posthog.py:16 - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2023-12-17 01:52:33,043 - INFO - run_localGPT.py:56 - Loading Model: TheBloke/Llama-2-70b-Chat-GGUF, on: cuda
2023-12-17 01:52:33,043 - INFO - run_localGPT.py:57 - This action can take a few minutes!
2023-12-17 01:52:33,043 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
Traceback (most recent call last):
  File "/nlsasfs/home/localGPT/run_localGPT.py", line 258, in <module>
    main()
  File "/nlsasfs/home/miniconda3/envs/localGPT/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/nlsasfs/home/miniconda3/envs/localGPT/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/nlsasfs/home/miniconda3/envs/localGPT/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/nlsasfs/home/mcq/viswanath/miniconda3/envs/localGPT/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/nlsasfs/home/mcq/viswanath/localGPT/run_localGPT.py", line 229, in main
    qa = retrieval_qa_pipline(device_type, use_history, promptTemplate_type="llama")
  File "/nlsasfs/home/mcq/viswanath/localGPT/run_localGPT.py", line 144, in retrieval_qa_pipline
    qa = RetrievalQA.from_chain_type(
  File "/nlsasfs/home/mcq/viswanath/miniconda3/envs/localGPT/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py", line 100, in from_chain_type
    combine_documents_chain = load_qa_chain(
  File "/nlsasfs/home/mcq/viswanath/miniconda3/envs/localGPT/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 249, in load_qa_chain
    return loader_mapping[chain_type](
  File "/nlsasfs/home/mcq/viswanath/miniconda3/envs/localGPT/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 73, in _load_stuff_chain
    llm_chain = LLMChain(
  File "/nlsasfs/home/mcq/viswanath/miniconda3/envs/localGPT/lib/python3.10/site-packages/langchain/load/serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
llm
  none is not an allowed value (type=type_error.none.not_allowed)
```

How do I fix this issue? Please help me out.
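For context (my reading of the traceback, not something stated in the thread): the pydantic message `llm none is not an allowed value` means `RetrievalQA.from_chain_type` was ultimately handed `llm=None`, i.e. the GGUF model loader returned nothing, typically because llama-cpp-python failed to import or the model file could not be loaded. A minimal defensive guard like the following would surface the real cause instead of the opaque validation error; the function names here are illustrative, not localGPT's actual API:

```python
# Hypothetical sketch: fail loudly when a model loader returns None,
# instead of letting LangChain's LLMChain raise an opaque pydantic
# ValidationError later. Names are illustrative, not localGPT's API.

def load_llm_or_raise(loader, model_id):
    """Run a loader callable and raise a descriptive error if it yields None."""
    llm = loader(model_id)
    if llm is None:
        raise RuntimeError(
            f"Model loader returned None for {model_id!r}: check that "
            "llama-cpp-python is installed and the GGUF file was downloaded."
        )
    return llm
```

Wrapping the loader call this way turns the late `LLMChain` failure into an immediate, actionable error message.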

mailmehao commented 7 months ago

I am seeing the same error:

```
load INSTRUCTOR_Transformer
max_seq_length 512
2023-12-19 09:09:29,467 - INFO - run_localGPT.py:59 - Loading Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cpu
2023-12-19 09:09:29,468 - INFO - run_localGPT.py:60 - This action can take a few minutes!
2023-12-19 09:09:29,468 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
Traceback (most recent call last):
  File "/home/jianghao/localGPT/run_localGPT.py", line 282, in <module>
    main()
  File "/home/jianghao/anaconda3/envs/localGPT/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/jianghao/anaconda3/envs/localGPT/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/jianghao/anaconda3/envs/localGPT/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/jianghao/anaconda3/envs/localGPT/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/jianghao/localGPT/run_localGPT.py", line 249, in main
    qa = retrieval_qa_pipline(device_type, use_history, promptTemplate_type=model_type)
  File "/home/jianghao/localGPT/run_localGPT.py", line 150, in retrieval_qa_pipline
    qa = RetrievalQA.from_chain_type(
  File "/home/jianghao/anaconda3/envs/localGPT/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py", line 100, in from_chain_type
    combine_documents_chain = load_qa_chain(
  File "/home/jianghao/anaconda3/envs/localGPT/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 249, in load_qa_chain
    return loader_mapping[chain_type](
  File "/home/jianghao/anaconda3/envs/localGPT/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 73, in _load_stuff_chain
    llm_chain = LLMChain(
  File "/home/jianghao/anaconda3/envs/localGPT/lib/python3.10/site-packages/langchain/load/serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
llm
  none is not an allowed value (type=type_error.none.not_allowed)
```
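Worth noting: both tracebacks fail immediately after the "Using Llamacpp for GGUF/GGML quantized models" log line, on both `cuda` and `cpu`, which points at the llama-cpp path rather than the device. A quick standalone check (my own suggestion, not part of the repo) is to verify that the `llama_cpp` module is actually importable in the active conda environment:

```python
# Standalone diagnostic: localGPT can only build a working LLM for GGUF
# models if llama-cpp-python is importable. This reports the package's
# availability without importing localGPT itself.
import importlib.util


def module_available(name):
    """Return True if `name` can be imported in the current environment."""
    return importlib.util.find_spec(name) is not None


if __name__ == "__main__":
    print("llama_cpp available:", module_available("llama_cpp"))
```

If this prints `False`, reinstalling llama-cpp-python into the same environment that runs `run_localGPT.py` would be the first thing to try.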

VISWANATH78 commented 7 months ago

Still having the same issue. Is there any way it can be fixed?

PromtEngineer commented 7 months ago

Working on updating all the versions of the dependencies.

VISWANATH78 commented 7 months ago

Please update it on the master branch @PromtEngineer and notify us; that would be helpful, thank you. I don't think any code in run_localGPT.py needs to change if this is a dependency issue. Please share just the updated requirements.txt, so that we can install those packages and keep contributing. Thank you.