YenRaven / annoy_ltm

annoy long term memory experiment for oobabooga/text-generation-webui

IndexError: pop index out of range #6

Closed · soctib closed 1 year ago

soctib commented 1 year ago

I wasn't able to narrow this one down to any specific cause, but my guess is that it is somehow related to the text generated by the bot:

  File "D:\oobabooga\installer_files\env\lib\site-packages\gradio\routes.py", line 395, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\oobabooga\installer_files\env\lib\site-packages\gradio\blocks.py", line 1193, in process_api
    result = await self.call_function(
  File "D:\oobabooga\installer_files\env\lib\site-packages\gradio\blocks.py", line 930, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\oobabooga\installer_files\env\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\oobabooga\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_t
hread
    return await future
  File "D:\oobabooga\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "D:\oobabooga\installer_files\env\lib\site-packages\gradio\utils.py", line 491, in async_iteration
    return next(iterator)
  File "D:\oobabooga\text-generation-webui\modules\chat.py", line 319, in generate_chat_reply_wrapper
    for i, history in enumerate(generate_chat_reply(text, shared.history, state, regenerate, _continue, loading_message=True)):
  File "D:\oobabooga\text-generation-webui\modules\chat.py", line 313, in generate_chat_reply
    for history in chatbot_wrapper(text, history, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message):
  File "D:\oobabooga\text-generation-webui\modules\chat.py", line 226, in chatbot_wrapper
    prompt = apply_extensions('custom_generate_chat_prompt', text, state, **kwargs)
  File "D:\oobabooga\text-generation-webui\modules\extensions.py", line 193, in apply_extensions
    return EXTENSION_MAP[typ](*args, **kwargs)
  File "D:\oobabooga\text-generation-webui\modules\extensions.py", line 80, in _apply_custom_generate_chat_prompt
    return extension.custom_generate_chat_prompt(text, state, **kwargs)
  File "D:\oobabooga\text-generation-webui\extensions\annoy_ltm\script.py", line 671, in custom_generate_chat_prompt
    return generator.custom_generate_chat_prompt(user_input, state, **kwargs)
  File "D:\oobabooga\text-generation-webui\extensions\annoy_ltm\script.py", line 657, in custom_generate_chat_prompt
    rows.pop(3 + len(memory_rows))
IndexError: pop index out of range
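
(For context: `list.pop(i)` raises `IndexError` whenever `i` falls outside the list's current bounds, which is what happens here once the computed index exceeds the number of remaining rows. A minimal illustration, with hypothetical values mirroring the failing call:)

    # Minimal illustration of the failure mode: popping an index
    # that no longer exists in the list. Values are hypothetical.
    rows = ["system", "context", "user", "reply"]   # prompt rows
    memory_rows = ["mem1", "mem2"]                  # injected memories

    # 3 + len(memory_rows) == 5, but rows only has indices 0..3,
    # so this raises IndexError: pop index out of range.
    rows.pop(3 + len(memory_rows))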
YenRaven commented 1 year ago

I think I see the issue. https://github.com/YenRaven/annoy_ltm/blob/bfac65383f68c154a2f4bd9fa4e4bd03c1b23f11/script.py#LL657C1-L658C43

        while len(rows) > min_rows and len(encode(''.join(rows))[0]) >= max_length:
            rows.pop(3 + len(memory_rows))

This code is meant to ensure the prompt doesn't overflow your model's maximum prompt length. It's possible that, if your memory is too long, it tries to remove all of the chat context from the prompt. Is it possible you have adjusted your settings for this extension? Perhaps set the memory-to-chat ratio to a higher value? A bounds-checked sketch of the loop follows below.
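
(One plausible guard, not the patch that was actually merged: cap the pop index at the last valid row and stop trimming once only the header and memory rows remain. The helper below is a hypothetical sketch; `encode`, `min_rows`, `max_length`, and the `3 + len(memory_rows)` offset are taken from the snippet above and passed in as parameters:)

    def trim_rows(rows, memory_rows, encode, min_rows, max_length):
        """Hypothetical bounds-checked version of the trimming loop:
        stops before popping an index that no longer exists."""
        while len(rows) > min_rows and len(encode(''.join(rows))[0]) >= max_length:
            pop_index = 3 + len(memory_rows)
            if pop_index >= len(rows):
                break  # only header + memory rows remain; stop trimming
            rows.pop(pop_index)
        return rows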

soctib commented 1 year ago

I made no adjustments to the settings of annoy itself. It seems to happen once the prompt and chat together reach a certain length.

YenRaven commented 1 year ago

@soctib Would you be willing to check out the branch for the fix ☝️ and see if it fixes the issue?

YenRaven commented 1 year ago

I tested the fix from #14 and it does not create any new issues, so I will close this issue with that merge for now. If you still experience this issue, please leave a comment and I'll re-open it.

soctib commented 1 year ago

Sorry for the delay. I can confirm all errors are gone.