Change

```json
"architectures": [
    "BunnyPhiForCausalLM"
],
"attention_dropout": 0.0,
"auto_map": {
    "AutoConfig": "configuration_bunny_phi.BunnyPhiConfig",
    "AutoModelForCausalLM": "modeling_bunny_phi.BunnyPhiForCausalLM"
},
```

to

```json
"architectures": [
    "PhiForCausalLM"
],
"attention_dropout": 0.0,
```
I have already applied this change to config.json.
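A quick sanity check that the edit took effect (a minimal sketch; adjust the path to your local model directory):

```python
# Confirm the edited config now names the stock Phi architecture and the
# remote-code auto_map entries pointing at the Bunny classes are gone.
import json

with open("Bunny-v1_0-3B/config.json") as f:
    cfg = json.load(f)

assert cfg["architectures"] == ["PhiForCausalLM"]
assert "auto_map" not in cfg, "remove the auto_map block as well"
```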
When I run

```sh
python ../../convert-hf-to-gguf.py Bunny-v1_0-3B
```

I get

```
Traceback (most recent call last):
  File "../../convert-hf-to-gguf.py", line 2593, in <module>
    main()
  File "../../convert-hf-to-gguf.py", line 2578, in main
    model_instance.set_vocab()
  File "../../convert-hf-to-gguf.py", line 116, in set_vocab
    self._set_vocab_gpt2()
  File "../../convert-hf-to-gguf.py", line 508, in _set_vocab_gpt2
    tokens, toktypes, tokpre = self.get_vocab_base()
  File "../../convert-hf-to-gguf.py", line 390, in get_vocab_base
    tokpre = self.get_vocab_base_pre(tokenizer)
  File "../../convert-hf-to-gguf.py", line 499, in get_vocab_base_pre
    raise NotImplementedError("BPE pre-tokenizer was not recognized - update get_vocab_base_pre()")
NotImplementedError: BPE pre-tokenizer was not recognized - update get_vocab_base_pre()
```
How can I solve this? Can you provide Bunny-v1.0-3B.gguf?
This seems to be caused by recent updates to llama.cpp; see https://github.com/ggerganov/llama.cpp/issues/8649.
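For anyone who wants to stay on a current llama.cpp rather than pinning an old version: the error comes from get_vocab_base_pre() in convert-hf-to-gguf.py, which hashes the tokenizer's output and matches it against a table of known pre-tokenizers. A minimal sketch of the usual workaround, assuming you replace the placeholder hash with the chkhsh the script logs just before raising, and that "phi-2" is the right mapping (an assumption, since Bunny-v1_0-3B is Phi-2 based):

```python
# In get_vocab_base_pre() in convert-hf-to-gguf.py, add a branch before the
# NotImplementedError is raised. The hash below is a placeholder: use the
# "chkhsh" value the script prints in its warning output for this tokenizer.
if chkhsh == "0000000000000000000000000000000000000000000000000000000000000000":
    # Assumption: Bunny-v1_0-3B inherits phi-2's GPT-2-style BPE pre-tokenizer;
    # if your build does not recognize "phi-2", "gpt-2" is the generic fallback.
    res = "phi-2"
```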
Sorry, I followed the instructions and used llama.cpp version b2636, but the conversion still failed. Did you successfully convert Bunny-v1.0-3B? If so, could you share the GGUF file directly?
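If the conversion still trips over the pre-tokenizer check, it may help to see which hash the tokenizer actually produces. A sketch of reproducing the check outside the script, assuming the tokenizer loads from the local model directory (the probe string is elided here; copy chktxt verbatim from your copy of convert-hf-to-gguf.py):

```python
# Recompute the chkhsh that convert-hf-to-gguf.py compares against its table
# of known pre-tokenizers, using the model's own tokenizer.
from hashlib import sha256
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Bunny-v1_0-3B", trust_remote_code=True)

chktxt = "<paste the chktxt probe string from convert-hf-to-gguf.py>"
chkhsh = sha256(str(tokenizer.encode(chktxt)).encode()).hexdigest()
print(chkhsh)  # the hash to register in get_vocab_base_pre()
```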
I tested the HF demo and found that the results from Bunny-v1.1-Llama-3-8B-V and Bunny-v1.0-3B are what I am looking for. However, llama.cpp does not currently support S2-Wrapper, so I want to convert Bunny-v1.0-3B to GGUF for use on edge devices (I have already tested Bunny-v1_0-4B.gguf, and the results were not ideal).
To convert Bunny-v1_0-3B to GGUF, I followed the instructions on the GitHub page. However, when I execute the final step:

```sh
python ../../convert-hf-to-gguf.py Bunny-v1_0-3B
```

I encounter the error:

```
KeyError: "could not find any of: ['rms_norm_eps']"
```

along with errors about several other missing fields in the config. I think the configs for Bunny-v1_0-3B and Bunny-v1_0-4B are different, which causes the error when loading the model. Could you please provide the config.json for Bunny-v1_0-3B or a solution to this issue?
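Until the correct config is published, one way to narrow this down (a sketch; the key names are from the stock microsoft/phi-2 config, which Bunny-v1_0-3B is based on):

```python
# List the normalization-epsilon keys the config actually provides. A stock
# PhiForCausalLM (phi-2) config carries "layer_norm_eps" rather than
# "rms_norm_eps", so a KeyError on rms_norm_eps suggests the converter mapped
# the checkpoint to the wrong architecture (e.g. the Phi-3-based
# Bunny-v1_0-4B layout, whose config does use rms_norm_eps).
import json

with open("Bunny-v1_0-3B/config.json") as f:
    cfg = json.load(f)

print(cfg.get("architectures"))
print({k: v for k, v in cfg.items() if "eps" in k})
```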