ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Feature Request: Architecture "LlavaMistralForCausalLM" not supported! #8533

Closed. dafei2017 closed this issue 4 days ago.

dafei2017 commented 1 month ago


Feature Description

The model at https://huggingface.co/microsoft/llava-med-v1.5-mistral-7b/tree/main is split into four shards, and the error above occurs while merging/converting the model. How can this be solved?
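
For context, the four-way sharding is most likely not the cause. convert-hf-to-gguf.py decides which converter class to use from the `architectures` field in config.json, and the string "LlavaMistralForCausalLM" has no registered converter, so it raises NotImplementedError regardless of how the weights are sharded. A minimal sketch of that kind of dispatch check (illustrative only; the set of supported names below is a made-up subset, not the script's actual table):

```python
import json
from pathlib import Path

# Sketch: the converter keys off config.json, not off the shard layout.
model_dir = Path("llava-med-v1.5-mistral-7b")  # hypothetical local path
config = json.loads((model_dir / "config.json").read_text(encoding="utf-8"))

arch = config["architectures"][0]  # "LlavaMistralForCausalLM" for this model
supported = {"LlamaForCausalLM", "MistralForCausalLM"}  # illustrative subset

if arch not in supported:
    raise NotImplementedError(f'Architecture "{arch}" not supported!')
```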

Motivation

```
python .\convert-hf-to-gguf.py .\llava-med-v1.5-mistral-7b\ --outfile .\llava-med\
Loading model: llava-med-v1.5-mistral-7b
Traceback (most recent call last):
  File "E:\gyf\offline-model\AR-agent\llama.cpp\convert-hf-to-gguf.py", line 1876, in <module>
    main()
  File "E:\gyf\offline-model\AR-agent\llama.cpp\convert-hf-to-gguf.py", line 1857, in main
    model_instance = model_class(dir_model, ftype_map[args.outtype], fname_out, args.bigendian)
  File "E:\gyf\offline-model\AR-agent\llama.cpp\convert-hf-to-gguf.py", line 50, in __init__
    self.model_arch = self._get_model_architecture()
  File "E:\gyf\offline-model\AR-agent\llama.cpp\convert-hf-to-gguf.py", line 281, in _get_model_architecture
    raise NotImplementedError(f'Architecture "{arch}" not supported!')
NotImplementedError: Architecture "LlavaMistralForCausalLM" not supported!
```
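
A workaround commonly used for LLaVA-style checkpoints (not verified against llava-med specifically) is to separate the language model from the vision parts yourself: keep only the LLM tensors, rewrite config.json to report the underlying text architecture (MistralForCausalLM here), convert that directory with convert-hf-to-gguf.py, and handle the vision tower and projector with the scripts under examples/llava in this repo. A rough sketch, assuming safetensors shards and that the vision tensors use the common model.vision_tower / model.mm_projector prefixes (verify against the actual tensor names before relying on this):

```python
import json
from pathlib import Path

from safetensors.torch import load_file, save_file

src = Path("llava-med-v1.5-mistral-7b")  # hypothetical paths
dst = Path("llava-med-llm-only")
dst.mkdir(exist_ok=True)

# Assumed prefixes for the vision tower and multimodal projector;
# check the checkpoint's tensor names, they may differ.
VISION_PREFIXES = ("model.vision_tower.", "model.mm_projector.")

# Gather the language-model tensors from all shards, dropping vision weights.
llm_tensors = {}
for shard in sorted(src.glob("*.safetensors")):
    for name, tensor in load_file(str(shard)).items():
        if not name.startswith(VISION_PREFIXES):
            llm_tensors[name] = tensor

save_file(llm_tensors, str(dst / "model.safetensors"))

# Rewrite config.json so convert-hf-to-gguf.py dispatches to the Mistral converter.
config = json.loads((src / "config.json").read_text(encoding="utf-8"))
config["architectures"] = ["MistralForCausalLM"]
(dst / "config.json").write_text(json.dumps(config, indent=2), encoding="utf-8")
```

You would also copy the tokenizer files (tokenizer.model, tokenizer_config.json, and so on) into the new directory before running the converter, and the image side still needs its own GGUF via the llava example scripts.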

Possible Implementation

No response

ccslience commented 1 month ago

I ran into the same issue.

github-actions[bot] commented 4 days ago

This issue was closed because it has been inactive for 14 days since being marked as stale.

ccslience commented 4 days ago

(Automatic email reply) Your email has been received, thank you. Zhang Jing

drzraf commented 2 days ago

Sounds like a valid issue