jmoney7823956789378 / oobabot

A Discord bot which talks to Large Language Model AIs running on oobabooga's text-generation-webui
MIT License

Unexpected Content-Type encountered: text/event-stream; Appearing after each message? #11

Open Inou-Yumiko opened 5 months ago

Inou-Yumiko commented 5 months ago

Greetings, I had been using an old version (1.9) on another computer for quite a while, and I just installed it on my new one with the new config file (2.3, I believe).

I ran into an issue (minor, maybe?) that happens after each reply. It doesn't crash or cause a fatal error, but it is likely causing the bot to stop at only one sentence per answer.

Examples:

[Nyanami]: Guess? Sure! I love books! They have a lot of fun things inside them!
Unexpected Content-Type encountered: text/event-stream; charset=utf-8. Response:
2024-04-09 10:57:34,046 DEBUG Response to 稲生 弓子 done! tokens: 0, time: 4.84s, latency: 3.82s, rate: 0.00 tok/s

[Nyanami]: facepalm The usual eight o'clock bedtime.
Unexpected Content-Type encountered: text/event-stream; charset=utf-8. Response:
2024-04-09 11:02:01,181 DEBUG Response to Yumi done! tokens: 0, time: 4.55s, latency: 4.55s, rate: 0.00 tok/s

[稲生 弓子]: Do you have a favorite book?

[Nyanami]: Meeep~ no~ I couldn't say!!!
Unexpected Content-Type encountered: text/event-stream; charset=utf-8. Response:
2024-04-09 11:07:15,615 DEBUG Response to 稲生 弓子 done! tokens: 0, time: 4.38s, latency: 4.38s, rate: 0.00 tok/s

jmoney7823956789378 commented 5 months ago

This isn't necessarily a bug, but more so breadcrumbs left from when oobabot was adapted to the OpenAI API. The unexpected content-type is due to the server sending a chunk with zero content along with a finish_reason (since this is streamed, that's how we know it's done). It's probably not causing single-sentence responses, because I do get some pretty lengthy ones when I ask for them... but I'll try my best and work out the kinks when I get the chance.
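For context, an OpenAI-compatible streaming endpoint (content type text/event-stream) signals completion with a final chunk whose delta carries no text, only a finish_reason. Here is a minimal sketch of how a client might consume such a stream; the chunk payloads below are illustrative, shaped like OpenAI-style streaming responses, not captured from oobabooga:

```python
import json

# Illustrative SSE data lines, shaped like OpenAI-compatible streaming chunks.
# The final content chunk carries no text, only a finish_reason.
sse_lines = [
    'data: {"choices": [{"delta": {"content": "Hello"}, "finish_reason": null}]}',
    'data: {"choices": [{"delta": {"content": " world"}, "finish_reason": null}]}',
    'data: {"choices": [{"delta": {}, "finish_reason": "stop"}]}',
    "data: [DONE]",
]

def consume(lines):
    """Accumulate streamed deltas; an empty delta plus finish_reason marks the end."""
    text = []
    for line in lines:
        payload = line.removeprefix("data: ")
        if payload == "[DONE]":
            break
        choice = json.loads(payload)["choices"][0]
        text.append(choice["delta"].get("content", ""))
        if choice["finish_reason"] is not None:
            break  # zero-content terminator, not an error
    return "".join(text)

print(consume(sse_lines))  # Hello world
```

A client that treats that zero-content terminator chunk as an unexpected response (rather than end-of-stream) would log exactly the kind of message seen above.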

Inou-Yumiko commented 5 months ago

I see, thanks! I wasn't sure whether it was causing the reply to stop, or was being caused by the reply stopping. I guess it's the latter.

Edit: is this also a leftover?

2024-04-09 13:18:52,544 ERROR Exception while running <coroutine object DiscordBot._send_text_response_in_channel at 0x0000022C0A0C89A0> response: not enough values to unpack (expected 2, got 1) ....

  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 579, in _send_response_message
    (sentence, abort_response) = self._filter_immersion_breaking_lines(response)
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 701, in _filter_immersion_breaking_lines
    username, remaining_text = match.groups()
ValueError: not enough values to unpack (expected 2, got 1)

On an unrelated topic, are there any settings to let the bot reply to other bots? We put various related bots running different models in the same room to see how they stack up against each other, but we always have to ping them (e.g. @bot1) after each message so they respond.

xBelladonna commented 5 months ago

Oh, I fixed this bug in a commit I haven't pushed yet. I'll fix it in a PR soon. It's purely cosmetic and doesn't have any effect on functionality, only appearing in logs.

jmoney7823956789378 commented 5 months ago

are there any settings to enable bot reply to other bots?

There is! However, there's a small bug in the way messages are queued for generation (at least for tabbyAPI; not sure about ooba) where the bot will freak out if it receives two messages at once. Both requests will be generated at the same time but not necessarily streamed to their respective message IDs, so you'll end up with two jumbled messages. Also, your computer will get very hot.

Inou-Yumiko commented 5 months ago

Oh, I fixed this bug in a commit I haven't pushed yet. I'll fix it in a PR soon. It's purely cosmetic and doesn't have any effect on functionality, only appearing in logs.

Ah, I see. I thought it was causing the sentence to not be sent; I guess it's something else, then. The bot seems to get cut off after one sentence, even though the next line doesn't appear to have any issues. I'll mess around with the settings a bit to find out.

image

image

image

image

As for ooba, last I tested (way back when Chris still updated the code), it would simply not respond if it had two messages queued at the same time; it dumped them into the void. My comment was more about how several people add their own bots from their own PCs, but the bots won't notice or interact with each other unless prompted.

xBelladonna commented 5 months ago

I've been sick (again) lately and haven't had a lot of time to look at this, but there are a few notable things in your post @Inou-Yumiko. That exception visible in your screenshots seems to be a different one that may be more relevant than the unexpected content-type. Do you think you could copy and paste the full traceback? That'll give me more to work with and hopefully fix whatever that is.

As far as my testing with my latest code has gone (using Oobabooga as a backend), sending several messages at once does queue them up, but the bot does not return coherent responses anyway, since it does not wait to reply to the messages in order and then rebuild the chat history with the new content, including its own responses. I'm working on implementing this by adding "aggregation windows": the bot waits a small amount of time to accumulate messages, queues the responses in the correct order, and builds the appropriate history. More to come once I get better.
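The "aggregation window" idea described above can be sketched roughly like this. This is a hypothetical illustration under my own assumptions, not the fork's actual implementation; the Aggregator class and its names are invented for the example:

```python
import asyncio

class Aggregator:
    """Collect messages that arrive within a short window, then handle them
    as one batch so replies are generated against a consistent history."""

    def __init__(self, window: float, handler):
        self.window = window      # seconds to wait for more messages
        self.handler = handler    # async callback receiving the batch
        self.pending: list[str] = []
        self._task: asyncio.Task | None = None

    def add(self, message: str) -> None:
        self.pending.append(message)
        # Start a flush timer only if one isn't already running.
        if self._task is None or self._task.done():
            self._task = asyncio.create_task(self._flush_later())

    async def _flush_later(self) -> None:
        await asyncio.sleep(self.window)  # the aggregation window
        batch, self.pending = self.pending, []
        await self.handler(batch)

async def main():
    batches: list[list[str]] = []

    async def handler(batch):
        batches.append(batch)

    agg = Aggregator(window=0.05, handler=handler)
    agg.add("msg1")
    agg.add("msg2")           # arrives within the window: same batch
    await asyncio.sleep(0.1)
    agg.add("msg3")           # arrives after the flush: new batch
    await asyncio.sleep(0.1)
    return batches

print(asyncio.run(main()))  # [['msg1', 'msg2'], ['msg3']]
```

The handler would then build one prompt history from the whole batch and generate replies in order, rather than racing per-message.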

With regards to the unexpected content-type message, that has been fixed in my latest develop branch, with the switch to the OpenAI-compatible API exclusively. The legacy Oobabooga API was removed from the text generation webui late last year, and since the generative AI space moves so fast, I figured there was no point in continuing to support something that hasn't existed for half a year now. Please let me know if legacy API support is something you'd still want.

Be aware that the develop branch is where I essentially push my bleeding edge features and fixes. Though I do some testing on a private branch before pushing to develop, there still might be some bugs I haven't caught yet. If you want to try using it, you will need to regenerate your configuration since a LOT has changed, but as long as you do that and copy everything over, things should work fine.

Inou-Yumiko commented 5 months ago

Hello, sorry, I was busy. Here are two examples. The bot cuts off the last sentence of the reply, as usual.

image

[Nyanami]: 毡は私が好み。あ

2024-04-18 14:46:55,707 ERROR Exception while running <coroutine object DiscordBot._send_text_response_in_channel at 0x000002AF7D1B6F20> response: not enough values to unpack (expected 2, got 1)
Stack (most recent call last):
  File "", line 198, in _run_module_as_main
  File "", line 88, in _run_code
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Scripts\oobabot.exe\__main__.py", line 7, in <module>
    sys.exit(main())
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\oobabot.py", line 268, in main
    run_cli()
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\oobabot.py", line 264, in run_cli
    oobabot.start()
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\oobabot.py", line 100, in start
    asyncio.run(self.runtime.run())
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\asyncio\runners.py", line 194, in run
    return runner.run(main)
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\asyncio\base_events.py", line 672, in run_until_complete
    self.run_forever()
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\asyncio\windows_events.py", line 321, in run_forever
    super().run_forever()
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\asyncio\base_events.py", line 639, in run_forever
    self._run_once()
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\asyncio\base_events.py", line 1985, in _run_once
    handle._run()
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\asyncio\events.py", line 88, in _run
    self._context.run(self._callback, *self._args)
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\discord\client.py", line 441, in _run_event
    await coro(*args, **kwargs)
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 245, in on_message
    await self._handle_response(
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 311, in _handle_response
    fancy_logger.get().error(
2024-04-18 14:46:55,707 ERROR Ignoring exception in on_message
Traceback (most recent call last):
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\discord\client.py", line 441, in _run_event
    await coro(*args, **kwargs)
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 245, in on_message
    await self._handle_response(
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 316, in _handle_response
    raise task.exception()
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 514, in _send_text_response_in_channel
    ) = await self._send_response_message(
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 579, in _send_response_message
    (sentence, abort_response) = self._filter_immersion_breaking_lines(response)
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 701, in _filter_immersion_breaking_lines
    username, remaining_text = match.groups()
ValueError: not enough values to unpack (expected 2, got 1)

image

[Nyanami]: Oh, I see!

[Nyanami]: Star Rail huh?

2024-04-18 14:47:44,726 ERROR Exception while running <coroutine object DiscordBot._send_text_response_in_channel at 0x000002AF7D1B72E0> response: not enough values to unpack (expected 2, got 1)
Stack (most recent call last):
  File "", line 198, in _run_module_as_main
  File "", line 88, in _run_code
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Scripts\oobabot.exe\__main__.py", line 7, in <module>
    sys.exit(main())
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\oobabot.py", line 268, in main
    run_cli()
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\oobabot.py", line 264, in run_cli
    oobabot.start()
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\oobabot.py", line 100, in start
    asyncio.run(self.runtime.run())
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\asyncio\runners.py", line 194, in run
    return runner.run(main)
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\asyncio\base_events.py", line 672, in run_until_complete
    self.run_forever()
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\asyncio\windows_events.py", line 321, in run_forever
    super().run_forever()
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\asyncio\base_events.py", line 639, in run_forever
    self._run_once()
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\asyncio\base_events.py", line 1985, in _run_once
    handle._run()
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\asyncio\events.py", line 88, in _run
    self._context.run(self._callback, *self._args)
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\discord\client.py", line 441, in _run_event
    await coro(*args, **kwargs)
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 245, in on_message
    await self._handle_response(
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 311, in _handle_response
    fancy_logger.get().error(
2024-04-18 14:47:44,727 ERROR Ignoring exception in on_message
Traceback (most recent call last):
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\discord\client.py", line 441, in _run_event
    await coro(*args, **kwargs)
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 245, in on_message
    await self._handle_response(
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 316, in _handle_response
    raise task.exception()
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 514, in _send_text_response_in_channel
    ) = await self._send_response_message(
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 579, in _send_response_message
    (sentence, abort_response) = self._filter_immersion_breaking_lines(response)
  File "C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\oobabot\discord_bot.py", line 701, in _filter_immersion_breaking_lines
    username, remaining_text = match.groups()
ValueError: not enough values to unpack (expected 2, got 1)

AlanMW commented 5 months ago

Just wanted to add that I frequently get the same error.

Traceback (most recent call last):
   File "/home/user/.local/lib/python3.10/site-packages/discord/client.py", line 441, in _run_event
     await coro(*args, **kwargs)
   File "/home/user/.local/lib/python3.10/site-packages/oobabot/discord_bot.py", line 245, in on_message
     await self._handle_response(
   File "/home/user/.local/lib/python3.10/site-packages/oobabot/discord_bot.py", line 316, in _handle_response
     raise task.exception()
   File "/home/user/.local/lib/python3.10/site-packages/oobabot/discord_bot.py", line 514, in _send_text_response_in_channel
     ) = await self._send_response_message(
   File "/home/user/.local/lib/python3.10/site-packages/oobabot/discord_bot.py", line 579, in _send_response_message
     (sentence, abort_response) = self._filter_immersion_breaking_lines(response)
   File "/home/user/.local/lib/python3.10/site-packages/oobabot/discord_bot.py", line 701, in _filter_immersion_breaking_lines
     username, remaining_text = match.groups()# + ('',)  # Assign default value ''
 ValueError: not enough values to unpack (expected 2, got 1)

If I modify line ~701 in discord_bot.py to username, remaining_text = match.groups() + ('',), it stops throwing the error, but then it doesn't know where to trim the message appropriately.

Edit: it might just be an issue with the groups defined in the regex. This gave me more success: username_pattern = re.compile(r'\[(.*?)\]:? (.*)'). But some of the responses were still a little funky; I'm not sure if that was from the regex change or not.
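The ValueError happens because match.groups() returns one tuple element per capture group in the pattern, and the caller unpacks into two names. A small sketch of the failure mode and of the two-group pattern quoted above; the one-group pattern here is hypothetical, since the thread doesn't show the original regex from discord_bot.py:

```python
import re

# Hypothetical one-group pattern, shaped like the kind that triggers the crash
# (the actual pattern in discord_bot.py line 701 isn't shown in this thread):
one_group = re.compile(r"(\w+): ")
m = one_group.match("Nyanami: hello")
try:
    username, remaining_text = m.groups()  # only one group was captured
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 1)

# AlanMW's two-group pattern captures both the username and the rest of the line:
username_pattern = re.compile(r"\[(.*?)\]:? (.*)")
m2 = username_pattern.match("[Nyanami]: hello there")
username, remaining_text = m2.groups()
print(username, remaining_text)  # Nyanami hello there
```

So the fix is to make the number of capture groups match the unpacking (or to guard the unpacking when the line doesn't match the expected shape).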

xBelladonna commented 4 months ago

@AlanMW Yeah, the immersion-breaking filter's regex is... fugged, for lack of a better term. This is partially my fault, and partially because the original bot was a very early work and wasn't designed to be robust against the wide variety of models and use cases people have. I tried to touch only what I had to in order to make the changes I wanted, and that turned out to be a bad compromise compared to redesigning large parts of the bot, which is what has happened on my fork.

I've just pushed release v0.3.2 to main, which has a number of fixes and new features, including a dynamically compiled regex for the immersion-breaking filter that depends on how the user configures their prompt history blocks and on the name of the Discord user the bot is responding to. It also has an option to disable the filter entirely if you find it causes more problems than it's worth.
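The dynamically compiled filter idea can be illustrated roughly like this. This is a hypothetical sketch, not the v0.3.2 implementation; build_filter_pattern and its signature are invented for the example, and the real code builds its pattern from the configured prompt history blocks as well:

```python
import re

def build_filter_pattern(bot_name: str, user_name: str) -> re.Pattern:
    # Escape the names so regex metacharacters in them are treated literally,
    # then match "Name: ..." or "[Name]: ..." style lines.
    names = "|".join(re.escape(n) for n in (bot_name, user_name))
    return re.compile(rf"^\[?({names})\]?:\s*(.*)$")

# Compile against the actual participants instead of a fixed guess:
pattern = build_filter_pattern("Nyanami", "稲生 弓子")
m = pattern.match("[Nyanami]: Guess? Sure!")
username, remaining_text = m.groups()
print(username, "->", remaining_text)  # Nyanami -> Guess? Sure!
```

Because the pattern is rebuilt from the known names, lines that merely contain a colon no longer get misread as immersion-breaking speaker tags.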

Feel free to give it a try. I highly recommend regenerating your config and reading the release changelogs from v0.3.0 to v0.3.2, because they describe in detail how to properly use and configure the new features. I would open a PR to fix this fork, but unfortunately my own fork's rapid development has outpaced this one, and at this point my changes can no longer be automatically merged.