PromtEngineer / localGPT

Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
Apache License 2.0

GPTQ models are not loaded while running run_localgpt.py #407

Open Pradeep987654321 opened 11 months ago

Pradeep987654321 commented 11 months ago

```
File "C:\Users\XXXX\AppData\Local\Programs\Python\Python310\local_llama\venv\lib\site-packages\auto_gptq\modeling_base.py", line 698, in from_quantized
    raise FileNotFoundError(f"Could not find model in {model_name_or_path}")
FileNotFoundError: Could not find model in TheBloke/WizardLM-7B-uncensored-GPTQ
```

Is anyone else facing this issue while running this?

ShishirMaidenCodeLife commented 11 months ago

TheBloke has changed the filename to "model.safetensors". To get localGPT running, first open the constants.py file inside the cloned localGPT folder, then update the values as:

```python
MODEL_ID = "TheBloke/Llama-2-7b-Chat-GPTQ"
MODEL_BASENAME = "model.safetensors"
```

You can change MODEL_ID to a model of your choice, but keep MODEL_BASENAME as model.safetensors.
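For context on why the traceback above fires, here is a minimal sketch of the kind of filename resolution that produces this error. This is an illustrative assumption, not AutoGPTQ's actual implementation: the loader looks for a checkpoint file matching the configured basename (with or without a known weight extension) inside the model directory, and raises `FileNotFoundError` when nothing matches, which is what happens when the repo's weight file has been renamed to `model.safetensors` but the configured basename still points at the old name.

```python
import os

def resolve_quantized_model(model_dir: str, model_basename: str) -> str:
    """Hypothetical helper mimicking quantized-checkpoint lookup.

    Tries the basename as given, then with common weight-file
    extensions appended, and fails the same way the traceback
    above does when no file is found.
    """
    candidates = [model_basename] + [
        model_basename + ext for ext in (".safetensors", ".bin", ".pt")
    ]
    for name in candidates:
        path = os.path.join(model_dir, name)
        if os.path.isfile(path):
            return path
    # Mirrors the error message seen in the reported traceback.
    raise FileNotFoundError(f"Could not find model in {model_dir}")
```

Under this sketch, setting the basename to the renamed file (`model.safetensors`) makes the lookup succeed, while a stale basename falls through every candidate and raises, matching the reported behavior.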