Closed: YenRaven closed this issue 1 year ago
Still failing, but with a different model. WSL Ubuntu
Bloke Uncensored Wizard LM 30B
```
failed to load character annoy metadata, generating from scratch...
building annoy index took 0.0026776790618896484 seconds...
Output generated in 73.15 seconds (7.52 tokens/s, 550 tokens, context 287, seed 1240699969)
Traceback (most recent call last):
  File "/home/perplexity/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/routes.py", line 422, in run_predict
    output = await app.get_blocks().process_api(
  File "/home/perplexity/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/blocks.py", line 1323, in process_api
    result = await self.call_function(
  File "/home/perplexity/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/blocks.py", line 1067, in call_function
    prediction = await utils.async_iteration(iterator)
  File "/home/perplexity/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/utils.py", line 336, in async_iteration
    return await iterator.__anext__()
  File "/home/perplexity/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/utils.py", line 329, in __anext__
    return await anyio.to_thread.run_sync(
  File "/home/perplexity/miniconda3/envs/textgen/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/home/perplexity/miniconda3/envs/textgen/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/home/perplexity/miniconda3/envs/textgen/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/home/perplexity/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/utils.py", line 312, in run_sync_iterator_async
    return next(iterator)
  File "/home/perplexity/text-generation-webui/modules/chat.py", line 327, in generate_chat_reply_wrapper
    for i, history in enumerate(generate_chat_reply(text, shared.history, state, regenerate, _continue, loading_message=True)):
  File "/home/perplexity/text-generation-webui/modules/chat.py", line 321, in generate_chat_reply
    for history in chatbot_wrapper(text, history, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message):
  File "/home/perplexity/text-generation-webui/modules/chat.py", line 230, in chatbot_wrapper
    prompt = apply_extensions('custom_generate_chat_prompt', text, state, **kwargs)
  File "/home/perplexity/text-generation-webui/modules/extensions.py", line 193, in apply_extensions
    return EXTENSION_MAP[typ](*args, **kwargs)
  File "/home/perplexity/text-generation-webui/modules/extensions.py", line 80, in _apply_custom_generate_chat_prompt
    return extension.custom_generate_chat_prompt(text, state, **kwargs)
  File "/home/perplexity/text-generation-webui/extensions/annoy_ltm/script.py", line 685, in custom_generate_chat_prompt
    return generator.custom_generate_chat_prompt(user_input, state, **kwargs)
  File "/home/perplexity/text-generation-webui/extensions/annoy_ltm/script.py", line 543, in custom_generate_chat_prompt
    loaded_history_last_index = index_to_history_position[loaded_history_items-1]
KeyError: -1
```
@bbecausereasonss Would you be willing to check out the potential fix ☝️ and see if it works for you? If the error does go away, please also check that the annoy database is not being rebuilt on every prompt. You can do this by setting `"annoy_ltm-logger_level": 2` in your settings.json and then checking the console log for messages saying the hash check passed but the DB has to be rebuilt.
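For reference, a minimal settings.json fragment with just that key set (any other keys you already have stay alongside it) would look like:

```json
{
  "annoy_ltm-logger_level": 2
}
```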
That fix did not work for me, actually; I posted this after pulling and retrying it. How can I rebuild the DB? Has it even been built? I'm a little confused. I was never able to get it to run inference at all.
I apologize, I didn't mean the last fix in main branch. I have a possible fix for this issue committed to this branch https://github.com/YenRaven/annoy_ltm/tree/fix-11-check-for-possible-empty-annoy-db-before-attempting-working-with-it
Gotcha, I'll try it. Thanks!
Looks like the new fix worked.
```
building annoy index took 0.0027000904083251953 seconds...
Output generated in 8.71 seconds (1.15 tokens/s, 10 tokens, context 73, seed 922527240)
building annoy index took 0.02743387222290039 seconds...
```
Thank you for your help!
Getting this issue, again... but slightly different. Was the fixed branch merged to main?
```
failed to load character annoy metadata, generating from scratch...
building annoy index took 0.018998384475708008 seconds...
Traceback (most recent call last):
  File "C:\Users\xxxx\Deep\TextGen\installer_files\env\lib\site-packages\gradio\routes.py", line 427, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\xxxx\Deep\TextGen\installer_files\env\lib\site-packages\gradio\blocks.py", line 1323, in process_api
    result = await self.call_function(
  File "C:\Users\xxxx\Deep\TextGen\installer_files\env\lib\site-packages\gradio\blocks.py", line 1067, in call_function
    prediction = await utils.async_iteration(iterator)
  File "C:\Users\xxxx\Deep\TextGen\installer_files\env\lib\site-packages\gradio\utils.py", line 336, in async_iteration
    return await iterator.__anext__()
  File "C:\Users\xxxx\Deep\TextGen\installer_files\env\lib\site-packages\gradio\utils.py", line 329, in __anext__
    return await anyio.to_thread.run_sync(
  File "C:\Users\xxxx\Deep\TextGen\installer_files\env\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\xxxx\Deep\TextGen\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\xxxx\Deep\TextGen\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\xxxx\Deep\TextGen\installer_files\env\lib\site-packages\gradio\utils.py", line 312, in run_sync_iterator_async
    return next(iterator)
  File "C:\Users\xxxx\Deep\TextGen\text-generation-webui\modules\chat.py", line 332, in generate_chat_reply_wrapper
    for i, history in enumerate(generate_chat_reply(text, shared.history, state, regenerate, _continue, loading_message=True)):
  File "C:\Users\xxxx\Deep\TextGen\text-generation-webui\modules\chat.py", line 317, in generate_chat_reply
    for history in chatbot_wrapper(text, history, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message):
  File "C:\Users\xxxx\Deep\TextGen\text-generation-webui\modules\chat.py", line 226, in chatbot_wrapper
    prompt = apply_extensions('custom_generate_chat_prompt', text, state, **kwargs)
  File "C:\Users\xxxx\Deep\TextGen\text-generation-webui\modules\extensions.py", line 193, in apply_extensions
    return EXTENSION_MAP[typ](*args, **kwargs)
  File "C:\Users\xxxx\Deep\TextGen\text-generation-webui\modules\extensions.py", line 80, in _apply_custom_generate_chat_prompt
    return extension.custom_generate_chat_prompt(text, state, **kwargs)
  File "C:\Users\xxxx\Deep\TextGen\text-generation-webui\extensions\annoy_ltm\script.py", line 494, in custom_generate_chat_prompt
    return generator.custom_generate_chat_prompt(user_input, state, **kwargs)
  File "C:\Users\xxxx\Deep\TextGen\text-generation-webui\extensions\annoy_ltm\script.py", line 395, in custom_generate_chat_prompt
    if shared.soft_prompt:
AttributeError: module 'modules.shared' has no attribute 'soft_prompt'
```
_Originally posted by @bbecausereasonss in https://github.com/YenRaven/annoy_ltm/issues/7#issuecomment-1561573597_
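This second error looks unrelated to the annoy DB itself: the installed copy of the extension still reads `shared.soft_prompt`, which newer text-generation-webui builds no longer define. A likely workaround (a sketch of the general technique, not necessarily what the extension actually ships) is to probe for the attribute instead of assuming it exists:

```python
import types

# Stand-in for modules.shared from a webui build that removed soft_prompt.
# In the real extension, `shared` is the imported modules.shared module.
shared = types.SimpleNamespace()

# `if shared.soft_prompt:` raises AttributeError when the attribute is gone;
# getattr with a default degrades gracefully to False instead.
uses_soft_prompt = getattr(shared, "soft_prompt", False)
```

With that guard, the extension simply behaves as if no soft prompt is in use on webui versions that dropped the attribute.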
Looking at the code, it seems that the hash check is somehow passing and an annoy database with 0 items is being loaded. This results in an attempt to set the last loaded index to -1, which is an invalid index.