oobabooga / text-generation-webui

A Gradio web UI for Large Language Models.

Load Model Problem #1800

Closed: one-pip closed this issue 11 months ago.

one-pip commented 1 year ago

Describe the bug

Traceback (most recent call last):
  File "D:\NEW_OOBA\oobabooga\oobabooga_windows\text-generation-webui\server.py", line 102, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "D:\NEW_OOBA\oobabooga\oobabooga_windows\text-generation-webui\modules\models.py", line 158, in load_model
    model = load_quantized(model_name)
  File "D:\NEW_OOBA\oobabooga\oobabooga_windows\text-generation-webui\modules\GPTQ_loader.py", line 147, in load_quantized
    exit()
  File "D:\NEW_OOBA\oobabooga\oobabooga_windows\installer_files\env\lib\_sitebuiltins.py", line 26, in __call__
    raise SystemExit(code)
SystemExit: None

Is there an existing issue for this?

Reproduction

(Same traceback as in the bug description above.)

Screenshot

No response

Logs

Can't determine model type from model name. Please specify it manually using --model_type argument
Loading thu-coai_LongLM-large...
Loading thu-coai_LongLM-large...
Can't determine model type from model name. Please specify it manually using --model_type argument
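
Why this surfaces as "SystemExit: None": in the traceback above, modules/GPTQ_loader.py prints the model-type error and then calls exit() with no argument, which raises SystemExit with a code of None. The snippet below is a hypothetical reconstruction of that code path, inferred only from the traceback and the log line; the helper infer_model_type() and the family names it checks are illustrative assumptions, not the repository's actual code.

def infer_model_type(model_name):
    """Illustrative guess of the GPTQ model family from the model folder name."""
    name = model_name.lower()
    for family in ("llama", "opt", "gptj"):  # assumed families, for illustration only
        if family in name:
            return family
    return None  # "thu-coai_LongLM-large" matches none of these

def load_quantized(model_name, model_type_arg=None):
    # model_type_arg stands in for the --model_type command-line argument
    model_type = model_type_arg or infer_model_type(model_name)
    if model_type is None:
        print("Can't determine model type from model name. "
              "Please specify it manually using --model_type argument")
        exit()  # exit() without a code -> the traceback ends with "SystemExit: None"
    # ... actual GPTQ loading would continue from here ...

load_quantized("thu-coai_LongLM-large")  # reproduces the SystemExit seen above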

System Info

Windows 11, ASUS RTX 3060 12 GB
LaaZa commented 1 year ago

You may be able to load it as a normal model, but I'm not sure.

Set --wbits to none (remove the option from the command line, or set it to none in the web UI) so that it does not try to load the model as a quantized GPTQ model, which it is not.
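
For reference, a hedged sketch of what this looks like on the command line, assuming the model folder is thu-coai_LongLM-large under text-generation-webui/models and the webui is launched with python server.py; the exact flags shown are illustrative:

# Fails: forces the GPTQ path for a model that is not GPTQ-quantized
python server.py --model thu-coai_LongLM-large --wbits 4

# Works: with --wbits removed (or wbits set to "None" in the Model tab of the UI),
# the model is loaded as a normal, non-quantized model
python server.py --model thu-coai_LongLM-large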

riaanlab1234 commented 1 year ago

Setting --wbits to none works like a charm. Thanks, LaaZa!

matichek commented 1 year ago

You mean set it to 0?

github-actions[bot] commented 11 months ago

This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.

ReekElderblood commented 8 months ago

I am getting this same error when I use the model "TheBloke_Yarn-Mistral-7B-128k-AWQ".

ijrmarinho commented 5 months ago

Getting the same error as well:

ui_model_menu.py", line 213, in load_model_wrapper

shared.model, shared.tokenizer = load_model(selected_model, loader)

                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

                     Trying to load TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ using ExLlamav2_HF
trey-greenn commented 5 months ago

Getting the same error as well:

ui_model_menu.py", line 213, in load_model_wrapper

shared.model, shared.tokenizer = load_model(selected_model, loader)

                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

                     Trying to load TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ using ExLlamav2_HF

Hey any luck on using this model, i am getting the same error!