Closed: Fusseldieb closed this issue 1 year ago
Temporarily "fixed" by going into "webui.py" and setting:
CMD_FLAGS = '--chat'
to
CMD_FLAGS = '--model wojtab_llava-7b-v0-4bit-128g --chat'
This doesn't solve the bug; it only works around it.
THANK YOU! <3 thank you kind internet stranger!
Same error here when enabling multimodal.
(And @Fusseldieb's workaround does work, thanks!)
This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.
This is still an issue.
bin D:\text-generation-webui\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll
2023-10-12 18:48:47 INFO:Loading settings from settings.yaml...
2023-10-12 18:48:47 INFO:Loading the extension "gallery"...
Starting streaming server at ws://127.0.0.1:5005/api/v1/stream
2023-10-12 18:48:47 INFO:Loading the extension "send_pictures"...
Starting API at http://127.0.0.1:5000/api
2023-10-12 18:48:50 INFO:Loading the extension "multimodal"...
Traceback (most recent call last):
File "D:\text-generation-webui\server.py", line 230, in <module>
create_interface()
File "D:\text-generation-webui\server.py", line 141, in create_interface
extensions_module.create_extensions_block() # Extensions block
File "D:\text-generation-webui\modules\extensions.py", line 192, in create_extensions_block
extension.ui()
File "D:\text-generation-webui\extensions\multimodal\script.py", line 98, in ui
multimodal_embedder = MultimodalEmbedder(params)
File "D:\text-generation-webui\extensions\multimodal\multimodal_embedder.py", line 27, in __init__
pipeline, source = load_pipeline(params)
File "D:\text-generation-webui\extensions\multimodal\pipeline_loader.py", line 34, in load_pipeline
model_name = shared.args.model.lower()
AttributeError: 'NoneType' object has no attribute 'lower'
Press any key to continue . . .
Edit: the multimodal extension's docs mention starting the server from the command line with the required arguments.
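The traceback pins the crash to `shared.args.model` being `None` when the server is started without `--model`, so `.lower()` raises `AttributeError`. A minimal sketch of the kind of guard that would turn this into a readable error (the function name `load_pipeline_name` is hypothetical, not the extension's actual `load_pipeline`):

```python
from types import SimpleNamespace


def load_pipeline_name(args):
    # args.model is None when the server was launched without --model;
    # fail with a clear message instead of an AttributeError on .lower().
    if args.model is None:
        raise ValueError(
            "The multimodal extension needs a loaded model; "
            "start the server with --model <model-name>."
        )
    return args.model.lower()


# With a model name set, this simply lowercases it, matching the
# behavior at pipeline_loader.py line 34 in the traceback.
args = SimpleNamespace(model="Wojtab_LLaVA-7b-v0-4bit-128g")
print(load_pipeline_name(args))
```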
Same here:
2023-11-23 20:41:02 INFO:Loading the extension "multimodal"...
Traceback (most recent call last):
File "C:\AITools\text-generation-webui\server.py", line 244, in <module>
create_interface()
File "C:\AITools\text-generation-webui\server.py", line 142, in create_interface
extensions_module.create_extensions_block() # Extensions block
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AITools\text-generation-webui\modules\extensions.py", line 192, in create_extensions_block
extension.ui()
File "C:\AITools\text-generation-webui\extensions\multimodal\script.py", line 99, in ui
multimodal_embedder = MultimodalEmbedder(params)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AITools\text-generation-webui\extensions\multimodal\multimodal_embedder.py", line 27, in __init__
pipeline, source = load_pipeline(params)
^^^^^^^^^^^^^^^^^^^^^
File "C:\AITools\text-generation-webui\extensions\multimodal\pipeline_loader.py", line 34, in load_pipeline
model_name = shared.args.model.lower()
^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'lower'
This is what my CMD_Flags.txt file has to load the model:
--multimodal-pipeline llava-v1.5-13b
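For completeness, the two flags seen in this thread can be combined in CMD_FLAGS.txt so the extension always finds a loaded model. The model folder name below is a placeholder; use the directory name of the model that matches your pipeline:

```
--model your-llava-model-folder --multimodal-pipeline llava-v1.5-13b --chat
```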
If you are using llava-v1.5, you need to open the config.json file and change:
"model_type": "llava",
to
"model_type": "llama",
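If you prefer to script the edit above rather than hand-editing config.json, a small sketch (the helper name `patch_model_type` is made up for illustration; it just rewrites the one key described above):

```python
import json
from pathlib import Path


def patch_model_type(config_path):
    # Load the model's config.json, swap "model_type" from "llava"
    # to "llama" as described above, and write it back.
    path = Path(config_path)
    config = json.loads(path.read_text(encoding="utf-8"))
    if config.get("model_type") == "llava":
        config["model_type"] = "llama"
        path.write_text(json.dumps(config, indent=2), encoding="utf-8")
    return config.get("model_type")
```

Point it at the config.json inside the model's folder; it leaves files untouched when "model_type" is already something other than "llava".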
Describe the bug
If I check the multimodal option and reload, it crashes.