Open Pradeep987654321 opened 11 months ago
TheBloke has changed the weights filename to "model.safetensors". To get localGPT running, first open the constants.py file inside the cloned localGPT folder, then update the values as follows:
MODEL_ID = "TheBloke/Llama-2-7b-Chat-GPTQ"
MODEL_BASENAME = "model.safetensors"
You can set MODEL_ID to a model of your choice, but keep MODEL_BASENAME as model.safetensors.
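For reference, here is a minimal sketch of what the relevant lines in constants.py might look like after the change. This is a hypothetical excerpt: the real constants.py in localGPT contains additional settings (device type, embedding model, etc.) that should be left untouched.

```python
# Hypothetical excerpt of localGPT's constants.py after the edit.
# Only these two values need to change; the rest of the file stays as-is.
MODEL_ID = "TheBloke/Llama-2-7b-Chat-GPTQ"   # any GPTQ repo of your choice
MODEL_BASENAME = "model.safetensors"          # must match TheBloke's new filename
```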
File "C:\Users\XXXX\AppData\Local\Programs\Python\Python310\local_llama\venv\lib\site-packages\auto_gptq\modeling_base.py", line 698, in from_quantized
    raise FileNotFoundError(f"Could not find model in {model_name_or_path}")
FileNotFoundError: Could not find model in TheBloke/WizardLM-7B-uncensored-GPTQ
Is anyone else facing this issue while running this?