oobabooga / text-generation-webui

A Gradio web UI for Large Language Models.
GNU Affero General Public License v3.0

Multimodal fails to load #3345

Closed Fusseldieb closed 1 year ago

Fusseldieb commented 1 year ago

Describe the bug

If I check the multimodal extension in the Session tab and restart the UI, the server crashes on startup.

Reproduction

  1. Load a Llava model, with ExLlama, for example
  2. Go into Session
  3. Check multimodal and restart
  4. The server crashes with the traceback below

Screenshot

No response

Logs

bin C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll
2023-07-28 06:05:31 INFO:Loading the extension "gallery"...
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
2023-07-28 06:05:54 INFO:Loading wojtab_llava-7b-v0-4bit-128g...
2023-07-28 06:05:59 INFO:Loaded the model in 5.08 seconds.

Closing server running on port: 7860
2023-07-28 06:06:05 INFO:Loading the extension "gallery"...
2023-07-28 06:06:05 INFO:Loading the extension "multimodal"...
Traceback (most recent call last):
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\text-generation-webui\server.py", line 1187, in <module>
    create_interface()
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\text-generation-webui\server.py", line 1086, in create_interface
    extensions_module.create_extensions_block()
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\text-generation-webui\modules\extensions.py", line 175, in create_extensions_block
    extension.ui()
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\text-generation-webui\extensions\multimodal\script.py", line 98, in ui
    multimodal_embedder = MultimodalEmbedder(params)
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\text-generation-webui\extensions\multimodal\multimodal_embedder.py", line 27, in __init__
    pipeline, source = load_pipeline(params)
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\text-generation-webui\extensions\multimodal\pipeline_loader.py", line 34, in load_pipeline
    model_name = shared.args.model.lower()
AttributeError: 'NoneType' object has no attribute 'lower'
Press any key to continue . . .
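
The last frames show the root cause: without a --model flag, shared.args.model is None, and pipeline_loader.py calls .lower() on it unconditionally. A minimal sketch of the kind of guard that would turn this into a readable error (illustrative only, not the project's actual code; the import path is assumed to match the webui's modules package):

from modules import shared  # the webui's shared command-line args

def load_pipeline(params):
    # shared.args.model is None unless a model was passed via --model,
    # which is exactly what produced the AttributeError above.
    if shared.args.model is None:
        raise ValueError(
            "The multimodal extension needs a model selected at launch, "
            "e.g. python server.py --model wojtab_llava-7b-v0-4bit-128g"
        )
    model_name = shared.args.model.lower()
    # ... pipeline selection based on model_name would continue here ...
    return model_name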

System Info

Windows 11 22H2
NVIDIA RTX 2080 8GB
8GB RAM
.. usual stuff ...

I'm on the latest version of text-generation-webui; I updated it via the updater script just to be sure.
Fusseldieb commented 1 year ago

Temporarily "fixed" by going into "webui.py" and setting:

CMD_FLAGS = '--chat'

to

CMD_FLAGS = '--model wojtab_llava-7b-v0-4bit-128g --chat'

This doesn't fix the underlying bug; it only works around it.
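
In other words, the extension only works when a model is already selected at launch. If you start the server manually instead of through the one-click installer, the equivalent workaround would be something like:

python server.py --model wojtab_llava-7b-v0-4bit-128g --chat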

RandomInternetPreson commented 1 year ago

THANK YOU! <3 thank you kind internet stranger!

LoopControl commented 1 year ago

Same error here when enabling multimodal.

(And @Fusseldieb's workaround does work, thanks.)

github-actions[bot] commented 1 year ago

This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.

Chanka0 commented 11 months ago

This is still an issue.

bin D:\text-generation-webui\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll
2023-10-12 18:48:47 INFO:Loading settings from settings.yaml...
2023-10-12 18:48:47 INFO:Loading the extension "gallery"...
Starting streaming server at ws://127.0.0.1:5005/api/v1/stream
2023-10-12 18:48:47 INFO:Loading the extension "send_pictures"...
Starting API at http://127.0.0.1:5000/api
2023-10-12 18:48:50 INFO:Loading the extension "multimodal"...
Traceback (most recent call last):
  File "D:\text-generation-webui\server.py", line 230, in <module>
    create_interface()
  File "D:\text-generation-webui\server.py", line 141, in create_interface
    extensions_module.create_extensions_block()  # Extensions block
  File "D:\text-generation-webui\modules\extensions.py", line 192, in create_extensions_block
    extension.ui()
  File "D:\text-generation-webui\extensions\multimodal\script.py", line 98, in ui
    multimodal_embedder = MultimodalEmbedder(params)
  File "D:\text-generation-webui\extensions\multimodal\multimodal_embedder.py", line 27, in __init__
    pipeline, source = load_pipeline(params)
  File "D:\text-generation-webui\extensions\multimodal\pipeline_loader.py", line 34, in load_pipeline
    model_name = shared.args.model.lower()
AttributeError: 'NoneType' object has no attribute 'lower'
Press any key to continue . . .

Edit: the multimodal extension's documentation says to start the server from the command line with the required arguments.
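
For anyone else hitting this: per that documentation, the model and pipeline are meant to be passed at launch, along the lines of (pipeline name assumed here from the 7B Llava model in the original report):

python server.py --model wojtab_llava-7b-v0-4bit-128g --multimodal-pipeline llava-7b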

H1ghSyst3m commented 10 months ago

Same here:

2023-11-23 20:41:02 INFO:Loading the extension "multimodal"...
Traceback (most recent call last):
  File "C:\AITools\text-generation-webui\server.py", line 244, in <module>
    create_interface()
  File "C:\AITools\text-generation-webui\server.py", line 142, in create_interface
    extensions_module.create_extensions_block()  # Extensions block
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AITools\text-generation-webui\modules\extensions.py", line 192, in create_extensions_block
    extension.ui()
  File "C:\AITools\text-generation-webui\extensions\multimodal\script.py", line 99, in ui
    multimodal_embedder = MultimodalEmbedder(params)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AITools\text-generation-webui\extensions\multimodal\multimodal_embedder.py", line 27, in __init__
    pipeline, source = load_pipeline(params)
                       ^^^^^^^^^^^^^^^^^^^^^
  File "C:\AITools\text-generation-webui\extensions\multimodal\pipeline_loader.py", line 34, in load_pipeline
    model_name = shared.args.model.lower()
                 ^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'lower'

RandomInternetPreson commented 10 months ago

This is what my CMD_Flags.txt file contains to load the model:

--multimodal-pipeline llava-v1.5-13b

If you are using llava-v1.5, you need to open the model's config.json file and change:

"model_type": "llava",

to

"model_type": "llama",