xNul / chat-llama-discord-bot

A Discord Bot for chatting with LLaMA, Vicuna, Alpaca, MPT, or any other Large Language Model (LLM) supported by text-generation-webui or llama.cpp.
https://discord.gg/TcRGDV754Y
MIT License

Startup errors #12

Closed SvetZitrka closed 1 year ago

SvetZitrka commented 1 year ago

I'm trying to run your script, but it's not working. I am not a developer. Every time I run the script it reported a missing Python library; I installed each one, but now I get this error:

```
PS E:\AI\oobabooga\text-generation-webui> python bot.py --model vicuna-13b-GPTQ-4bit-128g
Loading vicuna-13b-GPTQ-4bit-128g...
CUDA extension not installed.
Found the following quantized model: models\vicuna-13b-GPTQ-4bit-128g\vicuna-13b-4bit-128g.safetensors
Loading model ...
Done.
Traceback (most recent call last):
  File "E:\AI\oobabooga\text-generation-webui\bot.py", line 226, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "E:\AI\oobabooga\text-generation-webui\modules\models.py", line 127, in load_model
    model = load_quantized(model_name)
  File "E:\AI\oobabooga\text-generation-webui\modules\GPTQ_loader.py", line 193, in load_quantized
    model = model.to(torch.device('cuda:0'))
  File "C:\Users\mach4\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 1896, in to
    return super().to(*args, **kwargs)
  File "C:\Users\mach4\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "C:\Users\mach4\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "C:\Users\mach4\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "C:\Users\mach4\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "C:\Users\mach4\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "C:\Users\mach4\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```

It seems like the script doesn't know where to look for all the files. That could be a problem with the "Path". I run everything on Windows.
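For what it's worth, the `AssertionError` in the traceback usually points at the interpreter rather than the paths: the copy of torch visible to whichever Python ran `bot.py` is a CPU-only build. A small standard-library-only sketch (the helper name `torch_cuda_status` is made up for illustration) can show what a given interpreter sees:

```python
import importlib.util

def torch_cuda_status() -> str:
    """Describe whether torch is importable in THIS interpreter and,
    if so, whether its build was compiled with CUDA support."""
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed in this interpreter"
    import torch  # imported lazily so the check also works without torch
    return f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}"

print(torch_cuda_status())
```

Running this with the plain `python` from PowerShell versus the Python inside the oobabooga environment would show whether the two interpreters have different torch builds.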

The classic "oobabooga" works without a problem:

```
Starting the web UI...
"E:\AI\oobabooga\"
Gradio HTTP request redirected to localhost :)
bin E:\AI\oobabooga\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll
Loading vicuna-13b-GPTQ-4bit-128g...
Found the following quantized model: models\vicuna-13b-GPTQ-4bit-128g\vicuna-13b-4bit-128g.safetensors
Loading model ...
Done.
Loaded the model in 4.98 seconds.
Loading the extension "sd_api_pictures"... Ok.
Loading the extension "gallery"... Ok.
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
```

SvetZitrka commented 1 year ago

But if I run the script through a regular .bat file, just replacing "server.py" with "bot.py", it fails differently.

Contents of the BAT file:

```bat
@echo off
@echo Starting the web UI...
cd /D "%~dp0"
@echo "%~dp0"
set MAMBA_ROOT_PREFIX=%cd%\installer_files\conda
set INSTALL_ENV_DIR=%cd%\installer_files\env
if not exist "%MAMBA_ROOT_PREFIX%\condabin\conda.bat" (
  call "%MAMBA_ROOT_PREFIX%\_conda.exe" shell hook >nul 2>&1
)
call "%MAMBA_ROOT_PREFIX%\condabin\conda.bat" activate "%INSTALL_ENV_DIR%" || ( echo MicroMamba hook not found. && goto end )
cd text-generation-webui
call python bot.py --auto-devices --chat --listen-port 7860 --model vicuna-13b-GPTQ-4bit-128g --extension sd_api_pictures --auto-launch
:end
pause
```

This time the `discord` module is missing instead:

```
Starting the web UI...
"E:\AI\oobabooga\"
Traceback (most recent call last):
  File "E:\AI\oobabooga\text-generation-webui\bot.py", line 11, in <module>
    import discord
ModuleNotFoundError: No module named 'discord'
Press any key to continue . . .
```

How do I install the Discord module into the automatic "oobabooga" installation?

Garry-Marshall commented 1 year ago

Start `micromamba-cmd.bat` from the folder where you installed the Oobabooga project, and in the command shell that opens, enter `pip install discord`. That should install the Discord package into your virtual environment.
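If you want to verify it worked, this standard-library sketch (the helper name `has_module` is hypothetical) reports whether a package is importable by the interpreter you run it with, so you can confirm `discord` landed in the right environment:

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if `name` can be imported by THIS interpreter."""
    return importlib.util.find_spec(name) is not None

# After `pip install discord` inside the activated env, this should print True.
print("discord importable:", has_module("discord"))
```

Note that running it with a different `python` than the one the .bat file activates will check the wrong environment, which is exactly the trap in the original error.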

SvetZitrka commented 1 year ago

super