Describe the bug
Cannot load AWQ or GPTQ models; GGUF models and non-quantized models work fine.
This is from a fresh install. I installed the backends with `pip install autoawq` (and `pip install auto-gptq`), but the error still tells me they need to be installed. Running the install again just produces a long list of "Requirement already satisfied:" messages.
I've searched for other issues on this topic, but those solutions only called for installing the autoawq and auto-gptq libraries, which I've already done.
I get similar errors with auto-gptq; for brevity, I've only included the autoawq errors here.
I've tried every model loader and get a different error with each. I read that I should use the Transformers loader, and that also fails.
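A common cause of "Requirement already satisfied" combined with an `ImportError` is that pip installed the package into a different Python environment than the one the webui runs in (the one-click installer bundles its own under `installer_files\env`). As a minimal diagnostic sketch (assuming autoawq's import name is `awq`), running this with the webui's interpreter and then with plain `python` shows which environment actually has the package:

```python
# Quick diagnosis: which interpreter is running, and where (if anywhere)
# does it find the awq package? Compare the output from the webui's
# bundled Python with the output from the system "python".
import importlib.util
import sys

print("interpreter:", sys.executable)
spec = importlib.util.find_spec("awq")  # autoawq's import name is "awq"
print("awq found at:", spec.origin if spec else None)
```

If the webui's interpreter prints `None` while the system one finds the package, the fix is to install inside the bundled environment (e.g. via the `cmd_windows.bat` shell) rather than with the system pip.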
Is there an existing issue for this?
[X] I have searched the existing issues
Reproduction
```
10:37:07-369247 ERROR    Failed to load the model.
Traceback (most recent call last):
  File "K:\oobabooga\modules\ui_model_menu.py", line 232, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "K:\oobabooga\modules\models.py", line 93, in load_model
    output = load_func_map[loader](model_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "K:\oobabooga\modules\models.py", line 172, in huggingface_loader
    model = LoaderClass.from_pretrained(path_to_model, **params)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "K:\oobabooga\installer_files\env\Lib\site-packages\transformers\models\auto\auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "K:\oobabooga\installer_files\env\Lib\site-packages\transformers\modeling_utils.py", line 3657, in from_pretrained
    hf_quantizer.validate_environment(
  File "K:\oobabooga\installer_files\env\Lib\site-packages\transformers\quantizers\quantizer_awq.py", line 50, in validate_environment
    raise ImportError("Loading an AWQ quantized model requires auto-awq library (`pip install autoawq`)")
ImportError: Loading an AWQ quantized model requires auto-awq library (`pip install autoawq`)
```
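The check that raises this error runs inside the webui's own environment, which is why a successful `pip install` elsewhere doesn't help. A simplified sketch of the kind of availability test involved (based on the traceback, not the literal transformers source):

```python
# Simplified sketch of the environment check that produces the error
# above: if the awq package is not importable in the *current*
# interpreter, the loader refuses the model.
import importlib.util

def validate_awq_environment():
    if importlib.util.find_spec("awq") is None:
        raise ImportError(
            "Loading an AWQ quantized model requires auto-awq library "
            "(`pip install autoawq`)"
        )
```

So "Requirement already satisfied" from one pip and this ImportError from the webui can both be true at the same time, for two different environments.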
Screenshot
Here is a listing of the settings and error
Logs
```
10:37:07-369247 ERROR    Failed to load the model.
Traceback (most recent call last):
  File "K:\oobabooga\modules\ui_model_menu.py", line 232, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "K:\oobabooga\modules\models.py", line 93, in load_model
    output = load_func_map[loader](model_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "K:\oobabooga\modules\models.py", line 172, in huggingface_loader
    model = LoaderClass.from_pretrained(path_to_model, **params)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "K:\oobabooga\installer_files\env\Lib\site-packages\transformers\models\auto\auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "K:\oobabooga\installer_files\env\Lib\site-packages\transformers\modeling_utils.py", line 3657, in from_pretrained
    hf_quantizer.validate_environment(
  File "K:\oobabooga\installer_files\env\Lib\site-packages\transformers\quantizers\quantizer_awq.py", line 50, in validate_environment
    raise ImportError("Loading an AWQ quantized model requires auto-awq library (`pip install autoawq`)")
ImportError: Loading an AWQ quantized model requires auto-awq library (`pip install autoawq`)
```
![image_2024-11-25_103852305](https://github.com/user-attachments/assets/f80d0ac1-b8b5-40e8-9d48-13ba840f207a)
System Info
OS Windows 11
Processor 12th Gen Intel(R) Core(TM) i7-12700KF 3.60 GHz
Installed RAM 32.0 GB (31.8 GB usable)
System type 64-bit operating system, x64-based processor
GPU ZOTAC - Nvidia RTX 3060 12G