Closed: one-pip closed this issue 11 months ago
You may be able to load it as a normal model, but I'm not sure. Set --wbits to none (remove the option, or set it to None in the web UI) so it does not try to load it as a quantized GPTQ model, which it is not.
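From the command line this just means dropping the flag. A minimal sketch (`YourModel` is a placeholder for your model directory name):

```
# With --wbits set, the webui tries to load the model as 4-bit GPTQ,
# which fails for a model that is not GPTQ-quantized:
python server.py --model YourModel --wbits 4

# Dropping --wbits lets it load as a normal (unquantized) model:
python server.py --model YourModel
```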
Setting --wbits to none works like a charm, thanks LaaZa
You mean set it to 0?
This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.
I am getting this same error when I use the model "TheBloke_Yarn-Mistral-7B-128k-AWQ".
Getting the same error as well:

```
ui_model_menu.py", line 213, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```

Trying to load TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ using ExLlamav2_HF.
Hey, any luck using this model? I am getting the same error!
Describe the bug
```
Traceback (most recent call last):
  File "D:\NEW_OOBA\oobabooga\oobabooga_windows\text-generation-webui\server.py", line 102, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "D:\NEW_OOBA\oobabooga\oobabooga_windows\text-generation-webui\modules\models.py", line 158, in load_model
    model = load_quantized(model_name)
  File "D:\NEW_OOBA\oobabooga\oobabooga_windows\text-generation-webui\modules\GPTQ_loader.py", line 147, in load_quantized
    exit()
  File "D:\NEW_OOBA\oobabooga\oobabooga_windows\installer_files\env\lib\_sitebuiltins.py", line 26, in __call__
    raise SystemExit(code)
SystemExit: None
```
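For context, the final "SystemExit: None" is just how Python reports the exit() call that GPTQ_loader.py makes when it cannot load the model as a quantized GPTQ checkpoint; it is not a separate error. A minimal standalone illustration (not webui code):

```python
# exit() is a site builtin that raises SystemExit. GPTQ_loader.py calls it
# (line 147 in the traceback above) to abort the load.
try:
    exit()
except SystemExit as e:
    print(repr(e))  # prints SystemExit(None), reported as "SystemExit: None"
```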
Is there an existing issue for this?
Reproduction
Screenshot
No response
Logs
System Info