RodriMora opened this issue 6 months ago
You should run multimodal models with command-line flags, not just check the box in the Session tab.
You mean add --multimodal to CMD_FLAGS.txt?
Yes, but the multimodal extension is buggy right now. I couldn't load the wojtab_llava-7b-v0-4bit-128g model, and TheBloke_vicuna-7B-1.1-GPTQ doesn't seem to work either.
@Touch-Night When I add --multimodal in CMD_FLAGS.txt, I get the following error:
server.py: error: argument --multimodal-pipeline: expected one argument
It seems broken right now; I cannot load multimodal models either.
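As a side note, that exact message is consistent with how Python's argparse resolves long-option prefixes; a minimal sketch, assuming server.py declares the flag in the usual way (the declaration below is illustrative, not copied from the project):

```python
import argparse

parser = argparse.ArgumentParser(prog="server.py")
parser.add_argument("--multimodal-pipeline")  # flag requires one value

# argparse accepts unambiguous abbreviations of long options, so a bare
# "--multimodal" is read as "--multimodal-pipeline" with its value missing:
parser.parse_args(["--multimodal"])
# server.py: error: argument --multimodal-pipeline: expected one argument
```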
Thank you for confirming, @Touch-Night.
You need to load a pipeline with a CMD flag, for example: --multimodal-pipeline llava-7b
Available pipelines: ['llava-7b', 'llava-13b', 'llava-llama-2-13b', 'llava-v1.5-13b', 'llava-v1.5-7b']
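For instance, CMD_FLAGS.txt could carry both the model and the pipeline flag, one per line; a minimal sketch pairing the llava-7b pipeline with the LLaVA model mentioned earlier (swap in whatever model you actually use):

```
--model wojtab_llava-7b-v0-4bit-128g
--multimodal-pipeline llava-7b
```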
What about phi-3-vision instead?
TIA
Install them under extensions/multimodal/pipelines.
The defaults available are: ['llava-7b', 'llava-13b', 'llava-llama-2-13b', 'llava-v1.5-13b', 'llava-v1.5-7b']
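Since phi-3-vision is not among the defaults, it would need its own folder there; a hypothetical layout (the phi-3-vision directory and its contents are illustrative and do not ship with the webui):

```
extensions/multimodal/pipelines/
├── llava/            # bundled; provides the llava-* pipelines listed above
└── phi-3-vision/     # hypothetical custom pipeline directory
    └── pipelines.py  # would need to expose the new pipeline to the extension
```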
Describe the bug
When I toggle the option for multimodal, the software crashes.
Is there an existing issue for this?
Reproduction
Turning on the "multimodal" toggle in the "Session" tab throws an error and crashes textgen webui.
Screenshot
Logs
System Info