xNul / chat-llama-discord-bot

A Discord Bot for chatting with LLaMA, Vicuna, Alpaca, MPT, or any other Large Language Model (LLM) supported by text-generation-webui or llama.cpp.
https://discord.gg/TcRGDV754Y

Bot throwing an error and freezing #13

Closed FellowChello closed 1 year ago

FellowChello commented 1 year ago

Hey, was trying to get this running but no luck. The web UI works with no issues. Tried 3 different models and no luck.

    [2023-05-06 22:08:51] [INFO ] root: reply requested: '<@703797450640719923>: {'text': 'hey, how are you?', 'state': {'max_new_tokens': 200, 'seed': -1.0, 'temperature': 0.7, 'top_p': 0.1, 'top_k': 40, 'typical_p': 1, 'repetition_penalty': 1.18, 'encoder_repetition_penalty': 1, 'no_repeat_ngram_size': 0, 'min_length': 0, 'do_sample': True, 'penalty_alpha': 0, 'num_beams': 1, 'length_penalty': 1, 'early_stopping': False, 'add_bos_token': True, 'ban_eos_token': False, 'skip_special_tokens': True, 'truncation_length': 2048, 'custom_stopping_strings': '', 'name1': 'You', 'name2': 'Assistant', 'greeting': '', 'context': 'This is a conversation with your Assistant. The Assistant is very helpful and is eager to chat with you and answer your questions.', 'turn_template': '', 'chat_prompt_size': 2048, 'chat_generation_attempts': 1, 'stop_at_newline': False, 'mode': 'cai-chat'}, 'regenerate': False, '_continue': False}'
    Traceback (most recent call last):
      File "C:\TCHT\oobabooga_windows\text-generation-webui\modules\text_generation.py", line 232, in generate_reply_HF
        if not state['stream']:
    KeyError: 'stream'
    Output generated in 0.00 seconds (0.00 tokens/s, 0 tokens, context 42, seed 10024943)
    ERROR:discord.ext.commands.bot:Ignoring exception in command reply
    Traceback (most recent call last):
      File "C:\TCHT\oobabooga_windows\installer_files\env\lib\site-packages\discord\app_commands\commands.py", line 842, in _do_call
        return await self._callback(interaction, **params)  # type: ignore
      File "C:\TCHT\oobabooga_windows\text-generation-webui\bot.py", line 349, in reply
        await llm_gen(ctx, queues)
      File "C:\TCHT\oobabooga_windows\text-generation-webui\bot.py", line 263, in llm_gen
        resp_clean = resp[len(resp)-1][1]
    IndexError: list index out of range

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\TCHT\oobabooga_windows\installer_files\env\lib\site-packages\discord\ext\commands\hybrid.py", line 438, in _invoke_with_namespace
        value = await self._do_call(ctx, ctx.kwargs)  # type: ignore
      File "C:\TCHT\oobabooga_windows\installer_files\env\lib\site-packages\discord\app_commands\commands.py", line 860, in _do_call
        raise CommandInvokeError(self, e) from e
    discord.app_commands.errors.CommandInvokeError: Command 'reply' raised an exception: IndexError: list index out of range

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\TCHT\oobabooga_windows\installer_files\env\lib\site-packages\discord\ext\commands\hybrid.py", line 438, in _invoke_with_namespace
        value = await self._do_call(ctx, ctx.kwargs)  # type: ignore
      File "C:\TCHT\oobabooga_windows\installer_files\env\lib\site-packages\discord\app_commands\commands.py", line 860, in _do_call
        raise CommandInvokeError(self, e) from e
    discord.ext.commands.errors.HybridCommandError: Hybrid command raised an error: Command 'reply' raised an exception: IndexError: list index out of range

    [2023-05-06 22:08:54] [ERROR ] discord.ext.commands.bot: Ignoring exception in command reply
    Traceback (most recent call last):
      File "C:\TCHT\oobabooga_windows\installer_files\env\lib\site-packages\discord\app_commands\commands.py", line 842, in _do_call
        return await self._callback(interaction, **params)  # type: ignore
      File "C:\TCHT\oobabooga_windows\text-generation-webui\bot.py", line 349, in reply
        await llm_gen(ctx, queues)
      File "C:\TCHT\oobabooga_windows\text-generation-webui\bot.py", line 263, in llm_gen
        resp_clean = resp[len(resp)-1][1]
    IndexError: list index out of range

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\TCHT\oobabooga_windows\installer_files\env\lib\site-packages\discord\ext\commands\hybrid.py", line 438, in _invoke_with_namespace
        value = await self._do_call(ctx, ctx.kwargs)  # type: ignore
      File "C:\TCHT\oobabooga_windows\installer_files\env\lib\site-packages\discord\app_commands\commands.py", line 860, in _do_call
        raise CommandInvokeError(self, e) from e
    discord.app_commands.errors.CommandInvokeError: Command 'reply' raised an exception: IndexError: list index out of range

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\TCHT\oobabooga_windows\installer_files\env\lib\site-packages\discord\ext\commands\hybrid.py", line 438, in _invoke_with_namespace
        value = await self._do_call(ctx, ctx.kwargs)  # type: ignore
      File "C:\TCHT\oobabooga_windows\installer_files\env\lib\site-packages\discord\app_commands\commands.py", line 860, in _do_call
        raise CommandInvokeError(self, e) from e
    discord.ext.commands.errors.HybridCommandError: Hybrid command raised an error: Command 'reply' raised an exception: IndexError: list index out of range
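
The IndexError in the bot is a downstream symptom of the KeyError above: text_generation.py aborts at `state['stream']` before producing any output, so the reply list in `llm_gen` comes back empty and `resp[len(resp)-1][1]` has nothing to index. A defensive guard around that line would at least keep the bot from freezing; the sketch below is hypothetical and assumes only the names visible in the traceback (`resp`, `ctx`), not the actual surrounding code in bot.py:

    # Hypothetical guard around bot.py line 263 (names taken from the traceback).
    if not resp:
        # Upstream generation raised (here: KeyError: 'stream'), so there is no reply to post.
        await ctx.send("Generation failed - check the console for the underlying error.")
        return
    resp_clean = resp[-1][1]  # same as resp[len(resp)-1][1], but only reached when resp is non-empty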

a28218832 commented 1 year ago

Same here. ><


(update)

Here's how I got it kind of working: add the code below just before the first `except Exception:` in `text-generation-webui\modules\text_generation.py`:

    except KeyError:
        # 'stream' is missing from state, so fall back to the streaming generation path.
        def generate_with_callback(callback=None, **kwargs):
            kwargs['stopping_criteria'].append(Stream(callback_func=callback))
            clear_torch_cache()
            with torch.no_grad():
                shared.model.generate(**kwargs)

        def generate_with_streaming(**kwargs):
            return Iteratorize(generate_with_callback, kwargs, callback=None)

        with generate_with_streaming(**generate_params) as generator:
            for output in generator:
                if shared.soft_prompt:
                    output = torch.cat((input_ids[0], output[filler_input_ids.shape[1]:]))

                yield get_reply_from_output_ids(output, input_ids, original_question, state)
                if output[-1] in eos_token_ids:
                    break

I guess the error is caused by some update that dropped the 'stream' keyword from state. Judging from the context, a missing key should mean the value is False, which is why I added the code there.

BTW, since I'm not sure what this looks like in a normal situation, this might not be the best way to solve the error.
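
If the root cause really is just the missing default, a lighter-weight alternative (assuming the bot assembles the `state` dict itself before calling into text-generation-webui) would be to restore the key instead of patching the generation loop:

    # Hypothetical: wherever bot.py builds the generation settings, make sure the
    # newer 'stream' key exists so state['stream'] lookups in text_generation.py don't raise.
    state.setdefault('stream', False)

    # Equivalent defensive read on the webui side:
    # if not state.get('stream', False):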

FellowChello commented 1 year ago

Hey, thank you for that! I implemented your workaround and it will do until OP can resolve this.

xNul commented 1 year ago

Fixed with the latest commit.