Closed keninishna closed 1 year ago
text-generation-webui has made a lot of commits changing things recently. I think you cloned the repository at one of the commits where it was broken. I've just updated the bot's code so it's compatible with the latest version of text-generation-webui; if you update text-generation-webui again, it should work.
I just updated to the latest text-generation-webui commit and pulled bot.py again, and nothing changed: it still blocks until it disconnects, then reconnects and sends the reply.
What model are you using, and what command are you using to run the bot? Have you checked whether this error also happens while running the webui on its own? The problem could also be a webui library, so you could try running the pip install -r requirements.txt --upgrade command. I doubt this is an issue with the bot. If it's not a compatibility issue, it's probably an issue with the webui.
You are right, the webui was super slow (0.03 tokens/s). I was able to get it back to over 2 tokens/s. I am using llama.cpp models to run on CPU, and I think it's related to this issue: https://github.com/oobabooga/text-generation-webui/issues/866 To get it back up to speed I had to run pip install --force-reinstall --ignore-installed --no-cache-dir llama-cpp-python
So now the webui is working, but the bot throws errors and won't reply.
Traceback (most recent call last):
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/discord/app_commands/commands.py", line 842, in _do_call
    return await self._callback(interaction, params)  # type: ignore
  File "/home/user/text-generation-webui/bot.py", line 207, in reply
    await llm_gen(ctx, queues)
  File "/home/user/text-generation-webui/bot.py", line 125, in llm_gen
    for resp in chatbot_wrapper(user_input):
TypeError: chatbot_wrapper() got an unexpected keyword argument 'generate_state'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/discord/ext/commands/hybrid.py", line 438, in _invoke_with_namespace
    value = await self._do_call(ctx, ctx.kwargs)  # type: ignore
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/discord/app_commands/commands.py", line 856, in _do_call
    raise CommandInvokeError(self, e) from e
discord.app_commands.errors.CommandInvokeError: Command 'reply' raised an exception: TypeError: chatbot_wrapper() got an unexpected keyword argument 'generate_state'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/discord/ext/commands/hybrid.py", line 438, in _invoke_with_namespace
    value = await self._do_call(ctx, ctx.kwargs)  # type: ignore
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/discord/app_commands/commands.py", line 856, in _do_call
    raise CommandInvokeError(self, e) from e
discord.ext.commands.errors.HybridCommandError: Hybrid command raised an error: Command 'reply' raised an exception: TypeError: chatbot_wrapper() got an unexpected keyword argument 'generate_state'
I think I know the problem. ooba has broken the API again lol https://github.com/oobabooga/text-generation-webui/commit/0f212093a30367167bd9d1f5da8346e4432e5063#diff-78eb3bd39cd9ce0f38f5648368b3c258b8aab36039ec050f41eabe1497d46e1cR108. I'll release a fix later today.
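The TypeError above happens because the webui changed chatbot_wrapper's signature, so the bot passes a keyword argument the new version no longer accepts. One defensive pattern for surviving this kind of API churn is to inspect the target function's current signature and drop any keywords it doesn't take. A minimal sketch, where call_compat is a hypothetical helper (not part of the bot or the webui) and chatbot_wrapper below is a stand-in for the real function:

```python
import inspect

def call_compat(fn, *args, **kwargs):
    """Call fn, silently dropping keyword arguments its current
    signature does not accept (unless fn itself takes **kwargs)."""
    params = inspect.signature(fn).parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return fn(*args, **kwargs)
    accepted = {k: v for k, v in kwargs.items() if k in params}
    return fn(*args, **accepted)

# Hypothetical stand-in for the webui's chatbot_wrapper after the API change:
def chatbot_wrapper(text, state=None):
    return f"reply to {text!r}"

# Passing the old 'generate_state' keyword no longer raises TypeError:
print(call_compat(chatbot_wrapper, "hi", generate_state={}))
```

This only papers over renamed or removed keywords; if the new signature needs a value the caller doesn't supply, the bot still has to be updated to match.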
I've updated the bot. It should work correctly now
I'm going to close this, but feel free to open it again if the issue is still happening.
I am getting these errors now; the bot does reply, but each reply takes longer, and it disconnects and reconnects.
WARNING discord.gateway Shard ID None heartbeat blocked for more than 180 seconds.
Loop thread traceback (most recent call last):
  File "/home/user/text-generation-webui/bot.py", line 254, in <module>
    client.run(bot_args.token if bot_args.token else TOKEN, root_logger=True)
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/discord/client.py", line 860, in run
    asyncio.run(runner())
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/asyncio/base_events.py", line 636, in run_until_complete
    self.run_forever()
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
    self._run_once()
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/asyncio/base_events.py", line 1906, in _run_once
    handle._run()
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/discord/app_commands/tree.py", line 1089, in wrapper
    await self._call(interaction)
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/discord/app_commands/tree.py", line 1248, in _call
    await command._invoke_with_namespace(interaction, namespace)
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/discord/ext/commands/hybrid.py", line 438, in _invoke_with_namespace
    value = await self._do_call(ctx, ctx.kwargs)  # type: ignore
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/discord/app_commands/commands.py", line 842, in _do_call
    return await self._callback(interaction, params)  # type: ignore
  File "/home/user/text-generation-webui/bot.py", line 203, in reply
    await llm_gen(ctx, queues)
  File "/home/user/text-generation-webui/bot.py", line 124, in llm_gen
    for resp in chatbot_wrapper(user_input):
  File "/home/user/text-generation-webui/modules/chat.py", line 143, in chatbot_wrapper
    for reply in generate_reply(f"{prompt}{' ' if len(cumulative_reply) > 0 else ''}{cumulative_reply}", generate_state, eos_token=eos_token, stopping_strings=stopping_strings):
  File "/home/user/text-generation-webui/modules/text_generation.py", line 53, in generate_reply
    for reply in shared.model.generate_with_streaming(context=question, **generate_params):
  File "/home/user/text-generation-webui/modules/llamacpp_model_alternative.py", line 61, in generate_with_streaming
    for token in generator:
  File "/home/user/text-generation-webui/modules/callbacks.py", line 85, in __next__
    obj = self.q.get(True, None)
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/queue.py", line 11, in get
    self.not_empty.wait()
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/threading.py", line 320, in wait
    waiter.acquire()
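This warning means the CPU-bound generation is running directly on discord.py's event loop: while chatbot_wrapper iterates, the loop can't send gateway heartbeats, so Discord drops and re-establishes the connection. The usual fix is to run the blocking work in a worker thread and await it. A minimal sketch, where generate_reply_blocking is a hypothetical stand-in for the llama.cpp generation (not the bot's actual function):

```python
import asyncio
import time

def generate_reply_blocking(prompt: str) -> str:
    # Hypothetical stand-in for the CPU-bound llama.cpp generation,
    # which can take minutes per reply on CPU.
    time.sleep(0.1)
    return f"reply to {prompt!r}"

async def llm_gen(prompt: str) -> str:
    # asyncio.to_thread runs the blocking call in a worker thread,
    # so the event loop (and Discord's heartbeat) keeps running.
    return await asyncio.to_thread(generate_reply_blocking, prompt)

async def main() -> None:
    reply = await llm_gen("hello")
    print(reply)

asyncio.run(main())
```

With streaming output, the same idea applies per chunk: pull tokens from the generator inside the worker thread and hand them back to the loop via an asyncio.Queue rather than iterating the blocking generator in the coroutine.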