PromtEngineer / localGPT

Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
Apache License 2.0

Issue for loading the quantized model #406

Open · Pradeep987654321 opened this issue 1 year ago

Pradeep987654321 commented 1 year ago

When loading this model:

MODEL_ID = "TheBloke/WizardLM-7B-uncensored-GPTQ"
MODEL_BASENAME = "WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors"

I am facing the issue below. It worked fine before, but now it shows this error. @PromtEngineer, can you help me please?

Downloading tokenizer.model: 100%|██████████████████████████████████████████████████| 500k/500k [00:00<00:00, 5.75MB/s]
Downloading (…)/main/tokenizer.json: 100%|████████████████████████████████████████| 1.84M/1.84M [00:00<00:00, 2.23MB/s]
Downloading (…)in/added_tokens.json: 100%|██████████████████████████████████████████| 21.0/21.0 [00:00<00:00, 10.5kB/s]
Downloading (…)cial_tokens_map.json: 100%|██████████████████████████████████████████| 96.0/96.0 [00:00<00:00, 96.4kB/s]
2023-08-24 11:03:02,444 - INFO - run_localgpt.py:75 - Tokenizer loaded
Downloading (…)lve/main/config.json: 100%|█████████████████████████████████████████████| 708/708 [00:00<00:00, 354kB/s]
Downloading (…)quantize_config.json: 100%|██████████████████████████████████████████| 92.0/92.0 [00:00<00:00, 91.9kB/s]
Traceback (most recent call last):
  File "C:\Users\visionaries\AppData\Local\Programs\Python\Python310\local_llama\localGPT\run_localgpt.py", line 246, in <module>
    main()
  File "C:\Users\visionaries\AppData\Local\Programs\Python\Python310\local_llama\venv\lib\site-packages\click\core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\visionaries\AppData\Local\Programs\Python\Python310\local_llama\venv\lib\site-packages\click\core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "C:\Users\visionaries\AppData\Local\Programs\Python\Python310\local_llama\venv\lib\site-packages\click\core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\visionaries\AppData\Local\Programs\Python\Python310\local_llama\venv\lib\site-packages\click\core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "C:\Users\visionaries\AppData\Local\Programs\Python\Python310\local_llama\localGPT\run_localgpt.py", line 209, in main
    llm = load_model(device_type, model_id=MODEL_ID, model_basename=MODEL_BASENAME)
  File "C:\Users\visionaries\AppData\Local\Programs\Python\Python310\local_llama\localGPT\run_localgpt.py", line 77, in load_model
    model = AutoGPTQForCausalLM.from_quantized(
  File "C:\Users\visionaries\AppData\Local\Programs\Python\Python310\local_llama\venv\lib\site-packages\auto_gptq\modeling\auto.py", line 82, in from_quantized
    return quant_func(
  File "C:\Users\visionaries\AppData\Local\Programs\Python\Python310\local_llama\venv\lib\site-packages\auto_gptq\modeling\_base.py", line 698, in from_quantized
    raise FileNotFoundError(f"Could not find model in {model_name_or_path}")
FileNotFoundError: Could not find model in TheBloke/WizardLM-7B-uncensored-GPTQ
2023-08-24 11:03:04,268 - INFO - duckdb.py:414 - Persisting DB to disk, putting it in the save folder: C:\Users\visionaries\AppData\Local\Programs\Python\Python310\local_llama\localGPT/DB
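For context, localGPT hands these two constants to AutoGPTQ roughly like the minimal sketch below (this is not the exact load_model from run_localgpt.py; the device string and keyword arguments are assumptions). from_quantized looks in the Hugging Face repo for a weight file matching model_basename, so a basename that no longer matches any file in TheBloke/WizardLM-7B-uncensored-GPTQ raises exactly this FileNotFoundError.

from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

MODEL_ID = "TheBloke/WizardLM-7B-uncensored-GPTQ"
MODEL_BASENAME = "WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors"

# AutoGPTQ expects the basename without the ".safetensors" extension.
model_basename = MODEL_BASENAME.replace(".safetensors", "")

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=True)

# from_quantized looks for <model_basename>.safetensors in the repo; if no
# file with that name exists, it raises
# "FileNotFoundError: Could not find model in TheBloke/WizardLM-7B-uncensored-GPTQ".
model = AutoGPTQForCausalLM.from_quantized(
    MODEL_ID,
    model_basename=model_basename,
    use_safetensors=True,
    trust_remote_code=True,
    device="cuda:0",      # assumption: a CUDA device is available
    use_triton=False,
    quantize_config=None,
)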

cyrillzadra commented 1 year ago

Set MODEL_BASENAME to "model.safetensors"; that is the name of the weight file the repo ships now, so the old basename no longer matches anything.
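In other words, the constants become the following (a sketch based on the suggestion above, assuming localGPT strips the ".safetensors" extension before passing the basename to AutoGPTQ, as in the earlier sketch):

MODEL_ID = "TheBloke/WizardLM-7B-uncensored-GPTQ"
# The repo's quantized weights are published as a single file named
# model.safetensors, so the basename must match that file.
MODEL_BASENAME = "model.safetensors"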

Pradeep987654321 commented 1 year ago

Thanks @cyrillzadra