oobabooga / text-generation-webui

A Gradio web UI for Large Language Models.
GNU Affero General Public License v3.0

No module named 'texttable' #1609

Closed. Ege-P closed this issue 1 year ago.

Ege-P commented 1 year ago

Describe the bug

I searched the existing issues and the GitHub pages but couldn't solve this. I'm trying to load a Llama GPTQ model; in the Parameters tab I selected wbits 4, groupsize 128, and the llama model type.

When I try to load the model, I get the error below in the web UI (the console also prints a message saying triton is not installed).

With the one-click installer, gptq-for-llama is downloaded into the repositories folder automatically. If I delete that folder, I get a different error instead: No module named 'llama_inference_offload'.

I also tried reinstalling from the project's GitHub page instead of using installer.bat, but got the same result. A diagnostic sketch I used to narrow this down is below.
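For reference, here is a minimal sketch (my own, not part of the webui) that reports which of the modules named in the traceback are importable. It assumes the one-click installer layout (run it from the webui root with the installer's environment active, e.g. via cmd_windows.bat) and that the webui puts the repositories/GPTQ-for-LLaMa checkout on sys.path before importing it:

```python
# Diagnostic sketch: check which modules from the traceback resolve in
# the current environment, without actually importing them.
import importlib.util
import sys
from pathlib import Path

# The webui adds the GPTQ-for-LLaMa checkout to sys.path at load time;
# mimic that here so llama_inference_offload has a chance to resolve.
# The relative path is an assumption based on the one-click installer layout.
sys.path.insert(0, str(Path("repositories/GPTQ-for-LLaMa")))

for name in ("texttable", "triton", "llama_inference_offload"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'found' if found else 'MISSING'}")
```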

Is there an existing issue for this?

Reproduction

Download the model from https://huggingface.co/Aitrepreneur/wizardLM-7B-GPTQ-4bit-128g/tree/main and try loading it in the web UI.

Screenshot

No response

Logs

Traceback (most recent call last):
  File "D:\ALLAI\AIChatUI\text-generation-webui\server.py", line 102, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "D:\ALLAI\AIChatUI\text-generation-webui\modules\models.py", line 156, in load_model
    from modules.GPTQ_loader import load_quantized
  File "D:\ALLAI\AIChatUI\text-generation-webui\modules\GPTQ_loader.py", line 14, in <module>
    import llama_inference_offload
  File "D:\ALLAI\AIChatUI\text-generation-webui\repositories\GPTQ-for-LLaMa\llama_inference_offload.py", line 4, in <module>
    from gptq import GPTQ
  File "D:\ALLAI\AIChatUI\text-generation-webui\repositories\GPTQ-for-LLaMa\gptq.py", line 8, in <module>
    from texttable import Texttable
ModuleNotFoundError: No module named 'texttable'
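As a stopgap for the immediate error, the missing package can be installed into the same interpreter that raised it. A minimal sketch, assuming the webui's own environment is active (e.g. via the one-click installer's cmd_windows.bat):

```python
# Workaround sketch: install 'texttable' into the environment that runs
# server.py. Using sys.executable targets this exact interpreter rather
# than whatever pip happens to be first on PATH.
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "texttable"])
```

Note this only silences the first missing import; the root cause (the wrong GPTQ branch being checked out) is addressed in the answer below.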

System Info

Windows 10
Gigabyte GeForce RTX 4090
AMD Ryzen 9 7950X CPU

askmyteapot commented 1 year ago

The "triton not installed" message is normal on Windows (Triton is not compatible with it at the moment, only Linux or WSL2). But based on that error, you have installed the Triton branch of GPTQ-for-LLaMa. You need to install either Ooba's fork of GPTQ or the CUDA branch of Qwop's GPTQ.

NOTE: Qwop's GPTQ defaults to Triton. I would recommend just using Ooba's fork for the moment: https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md
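Something like the following (an untested sketch: the repositories path and the fork URL are assumptions based on the one-click installer layout and the docs linked above; back up anything you care about first):

```python
# Sketch: replace the Triton checkout under repositories/ with
# oobabooga's GPTQ-for-LLaMa fork. Run from the webui root directory.
import shutil
import subprocess
from pathlib import Path

target = Path("repositories") / "GPTQ-for-LLaMa"

if target.exists():
    shutil.rmtree(target)  # remove the Triton branch checkout
subprocess.check_call([
    "git", "clone",
    "https://github.com/oobabooga/GPTQ-for-LLaMa",  # assumed fork URL
    str(target),
])
```

This only swaps the checkout; any remaining build steps (installing the fork's requirements, compiling the CUDA kernel) should follow the linked docs.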

Ege-P commented 1 year ago

Yes, this worked perfectly and fixed the problems I had with other models as well. I don't know why the one-click installer didn't download the ooba branch (it may well have been my mistake at some point). Anyway, thank you.

github-actions[bot] commented 1 year ago

This issue has been closed due to inactivity for 30 days. If you believe it is still relevant, please leave a comment below.