xBelladonna / oobabot

A Discord bot which talks to Large Language Model AIs using just about any API-enabled backend.
MIT License

Multiple issues with untested configs (with hacky solutions) #2

Open dogarrowtype opened 4 months ago

dogarrowtype commented 4 months ago

Hi! Thanks for making this fork of oobabot! When working, it seems to perform much better with openai-compatible endpoints than other forks.

A few issues came up when trying to use OpenAI as the provider. The diff below contains my fixes; they're essentially typo-level changes.
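As a sketch of what the first hunk below fixes (assuming the YAML loader returns a real boolean for `stream_responses` when it's set to `false`, rather than a string):

```python
# A YAML loader parses "stream_responses: false" into a Python bool,
# and booleans have no .lower() method, so the original code crashes.
stream_responses = False  # what a YAML loader yields for "false"

try:
    stream_responses = stream_responses.lower()
except AttributeError as exc:
    # This is the failure mode; dropping .lower() avoids it.
    print(f"crash on boolean config value: {exc}")
```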

I think the tested config probably uses oobabooga (with token counts), has streaming responses enabled, and has a Stable Diffusion endpoint loaded. Without those things the bot breaks, even when they are explicitly disabled in the config. In particular, the token counting mechanism is not actually bypassed when disabled (the try/except fallback doesn't work as intended).
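A minimal sketch of why the fallback may never fire: the `except` clause only catches `ValueError`, but a missing or non-oobabooga token-count endpoint plausibly raises a different exception type, which then escapes. The exception class and function names here are hypothetical stand-ins, not the project's real ones:

```python
class EndpointUnavailableError(Exception):
    """Stand-in for the connection/HTTP error a missing endpoint raises."""

def get_token_count(prompt: str) -> int:
    # Simulates querying a token-count API that isn't available.
    raise EndpointUnavailableError("no token-count endpoint")

def count_units(prompt: str) -> int:
    try:
        return get_token_count(prompt)
    except ValueError:
        # Intended fallback: estimate by character length. Never reached
        # here, because the raised type is not a ValueError.
        return len(prompt)

try:
    count_units("hello")
except EndpointUnavailableError:
    print("fallback skipped; exception escaped")
```

Broadening the handler (or checking the config flag before calling the endpoint at all) would make the fallback reliable.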

```diff
diff --git a/src/oobabot/discord_bot.py b/src/oobabot/discord_bot.py
index 05c3ac1..57e49ec 100644
--- a/src/oobabot/discord_bot.py
+++ b/src/oobabot/discord_bot.py
@@ -97,7 +97,7 @@ class DiscordBot(discord.Client):
                 f"Unknown value '{self.prevent_impersonation}' for `prevent_impersonation`. "
                 + "Please fix your configuration."
             )
-        self.stream_responses = discord_settings["stream_responses"].lower()
+        self.stream_responses = discord_settings["stream_responses"] # Binary can't be lowercase
         if self.stream_responses and self.stream_responses not in ["token", "sentence"]:
             raise ValueError(
                 f"Unknown value '{self.stream_responses}' for `stream_responses`. "
@@ -400,6 +400,7 @@ class DiscordBot(discord.Client):
             # sequence. Because we wait to accumulate messages, this ensures the deleted
             # message doesn't trigger a response to whatever the latest message ends up being.
             if not guaranteed_response:
+                skip = False
                 for ignore_prefix in self.ignore_prefixes:
                     if message.body_text.startswith(ignore_prefix):
                         skip = True
@@ -473,9 +474,9 @@ class DiscordBot(discord.Client):
         is_image_coming = None

         # are we creating an image?
-        image_prompt = self.image_generator.maybe_get_image_prompt(message.body_text)
-        if image_prompt:
-            is_image_coming = await self.image_generator.try_session()
+        #image_prompt = self.image_generator.maybe_get_image_prompt(message.body_text)
+        #if image_prompt:
+        #    is_image_coming = await self.image_generator.try_session()

         # Determine if there are images and get descriptions (if Vision is enabled)
         image_descriptions = await self._get_image_descriptions(raw_message)
diff --git a/src/oobabot/prompt_generator.py b/src/oobabot/prompt_generator.py
index 4c298ce..f1803a3 100644
--- a/src/oobabot/prompt_generator.py
+++ b/src/oobabot/prompt_generator.py
@@ -232,10 +232,10 @@ class PromptGenerator:
             guild_name="",
             response_channel=""
         )
-        try:
-            prompt_units = await self.ooba_client.get_token_count(prompt_without_history)
-        except ValueError:
-            prompt_units = len(prompt_without_history)
+        #try:
+        #    prompt_units = await self.ooba_client.get_token_count(prompt_without_history)
+        #except ValueError:
+        prompt_units = len(prompt_without_history)

         # first we process and append the chat transcript
         context_full = False
@@ -281,10 +281,10 @@ class PromptGenerator:
                     {},
                 )

-            try:
-                line_units = await self.ooba_client.get_token_count(line)
-            except ValueError:
-                line_units = len(line)
+            #try:
+            #    line_units = await self.ooba_client.get_token_count(line)
+            #except ValueError:
+            line_units = len(line)

             if line_units >= self.max_context_units - prompt_units:
                 context_full = True
```
xBelladonna commented 1 month ago

Thank you very much for this! I'm sorry I haven't worked on this for a while, life has been busy, but I've made some updates that should fix these issues.

Please try again with the new version and let me know. If it's still not working, please post logs and/or any info you have, and I'll try to fix it!