PromtEngineer / localGPT

Chat with your documents on your local device using GPT models. No data leaves your device; 100% private.
Apache License 2.0

FileNotFound on Debian Server when running app.py (Unable to download, find the model) #452

Open schoemantian opened 1 year ago

schoemantian commented 1 year ago

Traceback (most recent call last):
  File ".../localGPT-main/app.py", line 143, in <module>
    main()
  File "/root/anaconda3/envs/localGPT/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/root/anaconda3/envs/localGPT/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/root/anaconda3/envs/localGPT/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/root/anaconda3/envs/localGPT/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File ".../localGPT-main/app.py", line 113, in main
    llm = load_model(device_type, model_id=model_id,
  File ".../localGPT-main/app.py", line 26, in load_model
    model = AutoGPTQForCausalLM.from_quantized(
  File "/root/anaconda3/envs/localGPT/lib/python3.10/site-packages/auto_gptq/modeling/auto.py", line 108, in from_quantized
    return quant_func(
  File "/root/anaconda3/envs/localGPT/lib/python3.10/site-packages/auto_gptq/modeling/_base.py", line 791, in from_quantized
    raise FileNotFoundError(f"Could not find model in {model_name_or_path}")
FileNotFoundError: Could not find model in TheBloke/WizardLM-7B-uncensored-GPTQ
2023-09-04 10:56:27,607 - INFO - duckdb.py:414 - Persisting DB to disk, putting it in the save folder: .../localGPT-main/DB
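One way to narrow this kind of failure down is to compare the configured basename against the files actually published in the Hugging Face repo (what the answer below calls the "Files and versions" tab). As a sketch, assuming the repo's file list has already been fetched (for example with `huggingface_hub.list_repo_files(model_id)`), a hypothetical helper can pick out the quantized weight file; note that auto_gptq generally expects `model_basename` without the file extension:

```python
def find_quantized_basename(repo_files):
    """Return the stem of the first .safetensors weight file in a repo
    listing, or None if there is none.

    `repo_files` is a list of filenames, e.g. from
    huggingface_hub.list_repo_files(model_id).
    """
    for name in repo_files:
        if name.endswith(".safetensors"):
            # auto_gptq expects model_basename without the extension
            return name[: -len(".safetensors")]
    return None


# Illustrative listing, similar in shape to TheBloke's GPTQ repos:
files = [
    "config.json",
    "quantize_config.json",
    "tokenizer.model",
    "model.safetensors",
]
print(find_quantized_basename(files))  # model
```

If this returns None, the repo has no `.safetensors` weights under that name, and `from_quantized` will raise exactly the FileNotFoundError seen above.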

schoemantian commented 1 year ago

Same on Windows, @PromtEngineer. Something in your constants and main app is not working well at all.

The default Llama 2 7B chat model loads, but then it never has enough memory to run, and when I change models it can't find them.

It's a vicious loop of a broken app:

(churnGPT) PS C:\development\churnGPT> python app.py
2023-09-11 15:50:40,338 - INFO - app.py:180 - Running on: cuda
2023-09-11 15:50:40,338 - INFO - app.py:181 - Display Source Documents set to: False
2023-09-11 15:50:40,777 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
max_seq_length  512
2023-09-11 15:50:44,422 - INFO - posthog.py:16 - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2023-09-11 15:50:44,563 - INFO - app.py:45 - Loading Model: TheBloke/stable-vicuna-13B-GPTQ, on: cuda
2023-09-11 15:50:44,563 - INFO - app.py:46 - This action can take a few minutes!
2023-09-11 15:50:44,563 - INFO - app.py:68 - Using AutoGPTQForCausalLM for quantized models
Downloading (…)okenizer_config.json: 100%|████████████████████████████████████████████████████████████████████| 699/699 [00:00<?, ?B/s]
Downloading tokenizer.model: 100%|██████████████████████████████████████████████████████████████████| 500k/500k [00:00<00:00, 4.00MB/s]
Downloading (…)/main/tokenizer.json: 100%|████████████████████████████████████████████████████████| 1.84M/1.84M [00:00<00:00, 19.7MB/s]
Downloading (…)in/added_tokens.json: 100%|██████████████████████████████████████████████████████████████████| 21.0/21.0 [00:00<?, ?B/s]
Downloading (…)cial_tokens_map.json: 100%|████████████████████████████████████████████████████████████████████| 410/410 [00:00<?, ?B/s]
2023-09-11 15:50:46,064 - INFO - app.py:75 - Tokenizer loaded
Downloading (…)lve/main/config.json: 100%|████████████████████████████████████████████████████████████████████| 769/769 [00:00<?, ?B/s]
Downloading (…)quantize_config.json: 100%|████████████████████████████████████████████████████████████████████| 116/116 [00:00<?, ?B/s]
Traceback (most recent call last):
  File "C:\development\churnGPT\app.py", line 249, in <module>
    main()
  File "C:\Users\tian\anaconda3\envs\churnGPT\lib\site-packages\click\core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\tian\anaconda3\envs\churnGPT\lib\site-packages\click\core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "C:\Users\tian\anaconda3\envs\churnGPT\lib\site-packages\click\core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\tian\anaconda3\envs\churnGPT\lib\site-packages\click\core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "C:\development\churnGPT\app.py", line 209, in main
    llm = load_model(device_type, model_id=MODEL_ID, model_basename=MODEL_BASENAME)
  File "C:\development\churnGPT\app.py", line 77, in load_model
    model = AutoGPTQForCausalLM.from_quantized(
  File "C:\Users\tian\anaconda3\envs\churnGPT\lib\site-packages\auto_gptq\modeling\auto.py", line 82, in from_quantized
    return quant_func(
  File "C:\Users\tian\anaconda3\envs\churnGPT\lib\site-packages\auto_gptq\modeling\_base.py", line 698, in from_quantized
    raise FileNotFoundError(f"Could not find model in {model_name_or_path}")
FileNotFoundError: Could not find model in TheBloke/stable-vicuna-13B-GPTQ

AnandMoorthy commented 1 year ago

It happened for me as well; I was using the wrong MODEL_BASENAME. Make sure you are using the right one: check the Files and versions tab for the model on Hugging Face. For your model, MODEL_BASENAME should be model.safetensors.
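In other words, the constants have to agree with the repo contents. A minimal sketch of the pairing (the repo ID comes from the traceback above; stripping the suffix before handing the basename to auto_gptq mirrors what localGPT's load_model appears to do, which is an assumption about the version in use):

```python
# Hypothetical constants.py-style settings. MODEL_BASENAME must match a
# file shown in the repo's "Files and versions" tab on Hugging Face.
MODEL_ID = "TheBloke/stable-vicuna-13B-GPTQ"
MODEL_BASENAME = "model.safetensors"

# auto_gptq's from_quantized takes the basename without its extension,
# so strip it before passing it through.
basename = MODEL_BASENAME.replace(".safetensors", "")
print(basename)  # model
```

If the `.safetensors` file in the repo has a different stem (older GPTQ repos often use longer names), MODEL_BASENAME has to be that exact filename instead.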