Got the following error when changing `injection_location` to `AFTER_NORMAL_CONTEXT_BUT_BEFORE_MESSAGES`:
```
NEW MEMORIES LOADED IN CHATBOT
('0 days ago, You said:\n'
'"I told you 3 days ago about a person named Coranna Ivanova. Do you remember '
'that? And do you remember what I told you about her?"')
scores (in order) [0.55102134]
Traceback (most recent call last):
File "C:\Users\Dustin\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\routes.py", line 442, in run_predict
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dustin\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1392, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dustin\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1111, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dustin\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\utils.py", line 346, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dustin\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\utils.py", line 339, in __anext__
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dustin\AppData\Local\Programs\Python\Python311\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dustin\AppData\Local\Programs\Python\Python311\Lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "C:\Users\Dustin\AppData\Local\Programs\Python\Python311\Lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dustin\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\utils.py", line 322, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "C:\Users\Dustin\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\utils.py", line 691, in gen_wrapper
yield from f(*args, **kwargs)
File "C:\Users\Dustin\Documents\Random\oobabooga webui\text-generation-webui-1.5\modules\chat.py", line 305, in generate_chat_reply_wrapper
for i, history in enumerate(generate_chat_reply(text, state, regenerate, _continue, loading_message=True)):
File "C:\Users\Dustin\Documents\Random\oobabooga webui\text-generation-webui-1.5\modules\chat.py", line 290, in generate_chat_reply
for history in chatbot_wrapper(text, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message):
File "C:\Users\Dustin\Documents\Random\oobabooga webui\text-generation-webui-1.5\modules\chat.py", line 206, in chatbot_wrapper
prompt = apply_extensions('custom_generate_chat_prompt', text, state, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dustin\Documents\Random\oobabooga webui\text-generation-webui-1.5\modules\extensions.py", line 207, in apply_extensions
return EXTENSION_MAP[typ](*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dustin\Documents\Random\oobabooga webui\text-generation-webui-1.5\modules\extensions.py", line 81, in _apply_custom_generate_chat_prompt
return extension.custom_generate_chat_prompt(text, state, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dustin\Documents\Random\oobabooga webui\text-generation-webui-1.5\extensions\long_term_memory\script.py", line 260, in custom_generate_chat_prompt
augmented_context = _build_augmented_context(memory_context, state["context"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dustin\Documents\Random\oobabooga webui\text-generation-webui-1.5\extensions\long_term_memory\script.py", line 103, in _build_augmented_context
raise ValueError(
ValueError: Cannot use AFTER_NORMAL_CONTEXT_BUT_BEFORE_MESSAGES, token not found in context. Please make sure you're using a proper character json and that you're NOT using the generic 'Assistant' sample character
```
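For anyone hitting the same error: judging by the traceback, the extension's `_build_augmented_context` searches the character context for a marker token and splices the retrieved memories in at that point, raising if the token is absent. A minimal sketch of that check (the marker name here is a guess for illustration; the real extension defines its own token, so check its README for the exact string to add to your character card):

```python
INJECTION_MARKER = "<injection_point>"  # hypothetical name, NOT the extension's actual token


def build_augmented_context(memory_context: str, character_context: str) -> str:
    # Splice the retrieved memories in at the marker; fail loudly if the
    # character card lacks one (this mirrors the ValueError in the traceback).
    if INJECTION_MARKER not in character_context:
        raise ValueError(
            "Cannot use AFTER_NORMAL_CONTEXT_BUT_BEFORE_MESSAGES, "
            "token not found in context."
        )
    return character_context.replace(INJECTION_MARKER, memory_context, 1)
```

In other words, the fix is likely to add the extension's injection token to the character's context field rather than to change any code.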
(Unrelated, but still a question: is there a way to speed it up? When memories are added to the context, it seems to re-process the entire conversation before generating a response, which results in a lot of waiting before the AI replies.)