ill13 / AutoSave

An auto save extension for text generated with the oobabooga WebUI

Worked great with first model. Swapped to another model. Won't work at all. #3

Open · left1000 opened 8 months ago

left1000 commented 8 months ago

So I had a conversation with my chatbot using Solar 7B (.awq) and this extension worked fine. Then I switched to a .gptq quant of the same model, continued the conversation, and this is what I got:

Traceback (most recent call last):
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\queueing.py", line 407, in call_prediction
    output = await route_utils.call_process_api(
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\route_utils.py", line 226, in call_process_api
    output = await app.get_blocks().process_api(
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1550, in process_api
    result = await self.call_function(
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1199, in call_function
    prediction = await utils.async_iteration(iterator)
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 519, in async_iteration
    return await iterator.__anext__()
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 512, in __anext__
    return await anyio.to_thread.run_sync(
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 495, in run_sync_iterator_async
    return next(iterator)
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 649, in gen_wrapper
    yield from f(*args, **kwargs)
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\modules\chat.py", line 365, in generate_chat_reply_wrapper
    for i, history in enumerate(generate_chat_reply(text, state, regenerate, _continue, loading_message=True, for_ui=True)):
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\modules\chat.py", line 333, in generate_chat_reply
    for history in chatbot_wrapper(text, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message, for_ui=for_ui):
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\modules\chat.py", line 301, in chatbot_wrapper
    output['visible'][-1][1] = apply_extensions('output', output['visible'][-1][1], state, is_chat=True)
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\modules\extensions.py", line 232, in apply_extensions
    return EXTENSION_MAP[typ](*args, **kwargs)
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\modules\extensions.py", line 90, in _apply_string_extensions
    text = func(*args, **kwargs)
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\extensions\AutoSave\script.py", line 52, in output_modifier
    save_data(string,timestamp=False)
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\extensions\AutoSave\script.py", line 30, in save_data
    f.write(json.dumps({"model": model, "adapter": adapter, "prompt" : myprompt, "reply":string} , indent=2 ))
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\json\__init__.py", line 238, in dumps
    **kw).encode(obj)
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\json\encoder.py", line 202, in encode
    chunks = list(chunks)
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\json\encoder.py", line 432, in _iterencode
    yield from _iterencode_dict(o, _current_indent_level)
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\json\encoder.py", line 406, in _iterencode_dict
    yield from chunks
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\json\encoder.py", line 439, in _iterencode
    o = _default(o)
  File "V:\AI-local-LLM-chatbot-stuff-no-spaces-for-miniconda\text-generation-webui-main\installer_files\env\Lib\json\encoder.py", line 180, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type method is not JSON serializable

Additionally, no new information was added to the log. I'm not sure what I did wrong or how to fix it.
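For what it's worth, the final TypeError says that one of the values in the dict serialized at script.py line 30 (model, adapter, myprompt, or string) is a bound method rather than a string; model is the natural suspect, since that's what changes between loaders. A minimal defensive sketch of what save_data could do instead, with hypothetical argument names and output path since only the json.dumps call is visible in the traceback:

import json

def save_data(string, timestamp=False, model="unknown", adapter=None, myprompt=""):
    # Hypothetical reworking of AutoSave's save_data (script.py line 30).
    # default=str stringifies anything json can't encode natively, so a bound
    # method (as in the traceback above) gets written out as its repr instead
    # of raising "TypeError: Object of type method is not JSON serializable".
    record = {"model": model, "adapter": adapter, "prompt": myprompt, "reply": string}
    with open("autosave.json", "a", encoding="utf-8") as f:
        f.write(json.dumps(record, indent=2, default=str))

This only papers over the crash; the underlying question of why the model value becomes a method under a different loader would still need investigating.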

left1000 commented 8 months ago

I'm using another model now (.awq again), turned autosave back on, and started a new conversation; autosave is working. I'm not sure whether .gptq breaks this extension entirely, or merely swapping models mid-conversation breaks it.

left1000 commented 8 months ago

Swapped mid-conversation from one .awq model to a totally different .awq model and autosave didn't break, so maybe it just hates 4-bit .gptq quant variants?
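If it helps narrow this down, a small diagnostic could report which field turns into a method when the .gptq loader is active (this is a hypothetical helper, not part of the extension):

import json

def safe_dump(record):
    # Probe each value individually so we can name the key that the
    # GPTQ loader hands back as a non-serializable object.
    for key, value in record.items():
        try:
            json.dumps(value)
        except TypeError:
            print(f"AutoSave: key {key!r} has non-serializable type {type(value).__name__}")
    # Fall back to stringifying the offending value so the save still succeeds.
    return json.dumps(record, indent=2, default=str)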