Closed: Dyl777 closed this issue 7 months ago
Are you talking about LLaVA? Basic support has been merged; see https://github.com/ggerganov/llama.cpp/blob/master/examples/llava/README.md
Thanks for pointing that out, but I'm not asking only about LLAMA V1.5; I'm referring to support for a format for MLLMs in general being released.
@Dyl777 There is no "llama v1.5" format. I guess you might be talking about LLaVA 1.5 or Vicuna 1.5, which many MLLMs use as their LLM. So far, any MLLM that is similar to LLaVA, such as ShareGPT4V or Obsidian, should work currently. These models work the same way; they just use different CLIP encoders or different LLMs.
Currently, though, any other architecture won't work, but I think support should come soon.
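For anyone landing here: a minimal sketch of running a LLaVA-style model with the merged example, assuming you have already converted the model and its CLIP projector to GGUF (the file paths and the prompt below are placeholders, not files shipped with the repo):

```shell
# Hypothetical local paths; substitute your own converted GGUF files.
MODEL=models/llava-v1.5-7b/ggml-model-f16.gguf
MMPROJ=models/llava-v1.5-7b/mmproj-model-f16.gguf

# llava-cli pairs the language model (-m) with the CLIP/projector
# weights (--mmproj) and feeds in one image (--image).
./llava-cli -m "$MODEL" --mmproj "$MMPROJ" \
  --image path/to/an/image.jpg \
  -p "Describe this image."
```

Swapping in a different llava-like model (e.g. ShareGPT4V) is just a matter of pointing `-m` and `--mmproj` at that model's converted files; the invocation itself is unchanged.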
This issue was closed because it has been inactive for 14 days since being marked as stale.
I'm curious whether MLLMs can work on it; I'm already assuming LLAMA V1.5 can't. I'd also suggest checking out more efficient MLLM models like X-LLM.