xNul / chat-llama-discord-bot

A Discord Bot for chatting with LLaMA, Vicuna, Alpaca, MPT, or any other Large Language Model (LLM) supported by text-generation-webui or llama.cpp.
https://discord.gg/TcRGDV754Y
MIT License
118 stars, 23 forks

TypeError: 'NoneType' object is not callable when running through the .bat file #20

Open Jake36921 opened 1 year ago

Jake36921 commented 1 year ago

The following flags have been taken from the environment variable 'OOBABOOGA_FLAGS':
--fkdlsja >nul 2>&1 & python bot.py --token --chat --model-menu
To use the CMD_FLAGS Inside webui.py, unset 'OOBABOOGA_FLAGS'.
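Two things stand out in this line: the environment variable contains what looks like an entire shell command line (`--fkdlsja >nul 2>&1 & python bot.py ...`) rather than plain flags, and `--token` is immediately followed by `--chat`, i.e. no token value. If bot.py declares `--token` with an optional value, the parsed token comes back empty and the code falls back to the `TOKEN` constant (the fallback is visible in the traceback below: `client.run(bot_args.token if bot_args.token else TOKEN)`). A minimal sketch of that failure mode, assuming an argparse declaration with `nargs="?"` (an assumption, not taken from bot.py):

```python
# Hypothetical sketch of how '--token' with no value yields an empty token;
# the exact argparse declaration in bot.py is an assumption here.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--token", nargs="?", default=None)   # value is optional
parser.add_argument("--chat", action="store_true")
parser.add_argument("--model-menu", action="store_true")

args = parser.parse_args(["--token", "--chat", "--model-menu"])
print(repr(args.token))  # None: '--chat' is not consumed as the token value
```

If `TOKEN` is then still an unset placeholder, Discord rejects the login with 401, exactly as the log shows further down.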

bin E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.dll
E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
function 'cadam32bit_grad_fp32' not found
The following models are available:

  1. ggml-alpaca-7b-q4.bin
  2. ggml-alpaca-13b-x-gpt-4-q4_0.bin
  3. Meth-ggmlv3-q4_0.bin
  4. OPT-13B-Erebus-4bit-128g.safetensors
  5. PMC_LLAMA-7B.ggmlv3.q5_0.bin
  6. pygmalion-7b-q5_1-ggml-v5.bin

Which one do you want to load? 1-6

2

INFO:Loading ggml-alpaca-13b-x-gpt-4-q4_0.bin...
INFO:llama.cpp weights detected: models\ggml-alpaca-13b-x-gpt-4-q4_0.bin

INFO:Cache capacity is 0 bytes
llama.cpp: loading model from models\ggml-alpaca-13b-x-gpt-4-q4_0.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32001
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 0.09 MB
llama_model_load_internal: mem required = 9031.71 MB (+ 1608.00 MB per state)
llama_init_from_file: kv self size = 1600.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
INFO:Loaded the model in 45.33 seconds.
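The load log is internally consistent; for example, the 1600.00 MB "kv self size" follows directly from the reported dimensions: a K and a V cache per layer, each holding n_ctx × n_embd fp16 values. A quick check:

```python
# Reproducing llama.cpp's "kv self size = 1600.00 MB" from the logged
# model dimensions (K and V caches, fp16, one pair per layer).
n_layer, n_ctx, n_embd = 40, 2048, 5120
bytes_per_fp16 = 2
kv_bytes = 2 * n_layer * n_ctx * n_embd * bytes_per_fp16  # 2 = K and V
print(kv_bytes / 2**20, "MB")  # 1600.0 MB
```

So the model itself loaded cleanly; the failure below is unrelated to llama.cpp.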

INFO:Loading the extension "gallery"...
[2023-06-13 19:37:45] [INFO    ] discord.client: logging in using static token
Traceback (most recent call last):
  File "E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\site-packages\discord\http.py", line 803, in static_login
    data = await self.request(Route('GET', '/users/@me'))
  File "E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\site-packages\discord\http.py", line 745, in request
    raise HTTPException(response, data)
discord.errors.HTTPException: 401 Unauthorized (error code: 0): 401: Unauthorized

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "E:\etc\bot\ChatLLaMA\oobabooga_windows\text-generation-webui\bot.py", line 544, in <module>
    client.run(bot_args.token if bot_args.token else TOKEN)
  File "E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\site-packages\discord\client.py", line 860, in run
    asyncio.run(runner())
  File "E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\asyncio\base_events.py", line 649, in run_until_complete
    return future.result()
  File "E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\site-packages\discord\client.py", line 849, in runner
    await self.start(token, reconnect=reconnect)
  File "E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\site-packages\discord\client.py", line 777, in start
    await self.login(token)
  File "E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\site-packages\discord\client.py", line 612, in login
    data = await self.http.static_login(token)
  File "E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\site-packages\discord\http.py", line 807, in static_login
    raise LoginFailure('Improper token has been passed.') from exc
discord.errors.LoginFailure: Improper token has been passed.

Exception ignored in: <function LlamaCppModel.__del__ at 0x000001CFE86E7AC0>
Traceback (most recent call last):
  File "E:\etc\bot\ChatLLaMA\oobabooga_windows\text-generation-webui\modules\llamacpp_model.py", line 23, in __del__
  File "E:\etc\bot\ChatLLaMA\oobabooga_windows\installer_files\env\lib\site-packages\llama_cpp\llama.py", line 1334, in __del__
TypeError: 'NoneType' object is not callable

Press any key to continue . . .
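The final `TypeError: 'NoneType' object is not callable` is a secondary effect, not the root failure: it is raised from `Llama.__del__` while the interpreter is already shutting down after the `LoginFailure`, at which point a module-level binding the destructor calls can have been cleared to `None`. A minimal sketch of the mechanism (the names are stand-ins, not the actual llama_cpp internals):

```python
# Sketch of the shutdown race behind the final TypeError. 'free_model'
# stands in for a module-level cleanup binding; during interpreter
# teardown such globals can already be None when __del__ runs.
free_model = None  # simulate the binding having been torn down


class FakeLlama:
    def __init__(self):
        self.ctx = object()

    def __del__(self):
        free_model(self.ctx)  # TypeError: 'NoneType' object is not callable


m = FakeLlama()
del m  # CPython prints "Exception ignored in: <function FakeLlama.__del__ ...>"
```

Guarding the call (`if free_model is not None:`) is the usual fix in a binding's destructor; either way, this message can be ignored here, since the process was already exiting because of the login failure.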

The script runs fine when I run 'python bot.py' directly from the command window opened by cmd_window.bat in oobabooga_windows, but whenever I launch it through the .bat file, it fails with this error.
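That is consistent with the token only going missing on the .bat path: discord.py raises `LoginFailure` whenever the login request returns 401, regardless of why the token is wrong, so any launch path that ends up passing an empty or placeholder token reproduces this crash. A minimal standalone reproduction (the token string is deliberately fake):

```python
# Minimal reproduction of the 401 -> LoginFailure chain from the
# traceback above: any invalid token makes discord.py fail the same way.
import discord

client = discord.Client(intents=discord.Intents.default())
try:
    client.run("obviously-not-a-real-token")
except discord.LoginFailure as exc:
    print(exc)  # "Improper token has been passed."
```

Since the launcher's own message says to unset 'OOBABOOGA_FLAGS' to let CMD_FLAGS apply, comparing that variable between the working shell and the .bat environment should isolate which token actually reaches `client.run`.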