Loading model: deepseek-vl-7b-chat
Traceback (most recent call last):
  File "/home/zhangyuanfeng/software/ollama/llm/llama.cpp/./convert-hf-to-gguf.py", line 2099, in <module>
    main()
  File "/home/zhangyuanfeng/software/ollama/llm/llama.cpp/./convert-hf-to-gguf.py", line 2079, in main
    model_class = Model.from_model_architecture(hparams["architectures"][0])
  File "/home/zhangyuanfeng/software/ollama/llm/llama.cpp/./convert-hf-to-gguf.py", line 215, in from_model_architecture
    raise NotImplementedError(f'Architecture {arch!r} not supported!') from None
NotImplementedError: Architecture 'MultiModalityCausalLM' not supported!
Sorry for this dumb question, but I did search for answers and tried a few things before asking. Running llama.cpp's convert-hf-to-gguf.py on this model returned the traceback above.
So is there any feasible method to convert it? Thanks.
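For context, the failure comes from the converter's architecture dispatch: it reads the `architectures` list from the model's `config.json` and looks the first entry up in a registry of supported model classes. A minimal sketch of that logic (the registry contents here are illustrative, not the script's actual list) reproduces the error:

```python
# Illustrative sketch of convert-hf-to-gguf.py's dispatch; the supported-set
# below is a made-up subset, only the error path mirrors the real script.
SUPPORTED_ARCHITECTURES = {"LlamaForCausalLM", "MistralForCausalLM"}

def from_model_architecture(arch: str) -> str:
    # The real script maps the name to a Model subclass; here we just
    # validate it to show where the exception is raised.
    if arch not in SUPPORTED_ARCHITECTURES:
        raise NotImplementedError(f"Architecture {arch!r} not supported!")
    return arch

# deepseek-vl-7b-chat's config.json declares this architecture name,
# which is why the lookup fails.
hparams = {"architectures": ["MultiModalityCausalLM"]}
try:
    from_model_architecture(hparams["architectures"][0])
except NotImplementedError as e:
    print(e)  # prints: Architecture 'MultiModalityCausalLM' not supported!
```

In other words, the converter has no entry for `MultiModalityCausalLM` (a vision-language model), so it refuses the model regardless of its weights.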