BAAI-DCAI / Bunny

A family of lightweight multimodal models.
Apache License 2.0

convert to gguf for llama.cpp #80

Closed · zhaohengxing closed this issue 1 month ago

zhaohengxing commented 1 month ago

Can anyone give some details on how to convert models like Bunny-v1.0-4B and Bunny-v1.1-4B to GGUF for llama.cpp?

Isaachhh commented 1 month ago

llama.cpp doesn't support the S^2-Wrapper yet.

You can load our 384x384 model with llama.cpp.

Bunny-v1.0-4B: https://huggingface.co/BAAI/Bunny-v1_0-4B-gguf
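
For reference, the GGUF weights from that repo can be driven from Python via llama-cpp-python's LLaVA-style handler. This is only a minimal sketch, not an official recipe: the exact file names in the HF repo, the quantization suffix, and whether the LLaVA-1.5 chat template matches Bunny's expected prompt format are all assumptions to verify against the repo listing.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# File names below are placeholders; check the actual files in
# https://huggingface.co/BAAI/Bunny-v1_0-4B-gguf before running.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# The mmproj GGUF holds the vision encoder + projector; the main GGUF
# holds the language model. Both are needed for image input.
chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
llm = Llama(
    model_path="ggml-model-Q4_K_M.gguf",  # placeholder quantization
    chat_handler=chat_handler,
    n_ctx=4096,  # leave room for image tokens plus the reply
)

out = llm.create_chat_completion(messages=[{
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": "file:///path/to/image.jpg"}},
        {"type": "text", "text": "Describe this image."},
    ],
}])
print(out["choices"][0]["message"]["content"])
```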

zhaohengxing commented 1 month ago

Thank you. Could you provide a script similar to llava-surgery.py in llama.cpp that can be used to convert Bunny-v1.0-4B?
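
For context, llava-surgery.py's job is just to split the multimodal checkpoint: it pulls the projector (and, in the v2 script, the vision tower) tensors out so that the remaining language model can go through the normal GGUF converter. A rough, unofficial sketch of that idea for a Bunny checkpoint; the shard pattern and the "mm_projector"/"vision_tower" key prefixes are assumptions to check against your own weights:

```python
# Hedged sketch of a llava-surgery-style split for a Bunny checkpoint.
# Assumes an HF-style folder of pytorch_model-*.bin shards; the key
# substrings follow the LLaVA layout that Bunny reuses, but verify
# them against your own checkpoint before trusting the split.
import glob
import torch

model_path = "path/to/Bunny-v1_0-4B"  # placeholder local checkpoint dir

projector, vision = {}, {}
for shard in glob.glob(f"{model_path}/pytorch_model-*.bin"):
    weights = torch.load(shard, map_location="cpu")
    for name, tensor in weights.items():
        if "mm_projector" in name:
            projector[name] = tensor
        elif "vision_tower" in name:
            vision[name] = tensor

# llama.cpp's surgery scripts save the multimodal pieces separately so
# the converter only sees the language-model tensors.
torch.save(projector, f"{model_path}/llava.projector")
torch.save(vision, f"{model_path}/llava.clip")
print(f"extracted {len(projector)} projector and {len(vision)} vision tensors")
```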

udayakumar-cyb commented 1 month ago

How do we convert a fine-tuned Bunny-v1.0-4B model to GGUF? I tried the scripts in llama.cpp: llava-surgery-v2.py seems to work fine for this model, but the next step fails to convert to GGUF. Attaching the logs below:

```
WARNING:convert:Unexpected tensor name: model.vision_tower.vision_tower.vision_model.post_layernorm.weight - skipping
Traceback (most recent call last):
  File "/home/ubuntu/finetuning-bunny/llama.cpp/./convert.py", line 1714, in <module>
    main()
  File "/home/ubuntu/finetuning-bunny/llama.cpp/./convert.py", line 1701, in main
    ftype = pick_output_type(model, args.outtype)
  File "/home/ubuntu/finetuning-bunny/llama.cpp/./convert.py", line 1307, in pick_output_type
    wq_type = model[gguf.TENSOR_NAMES[gguf.MODEL_TENSOR.ATTN_Q].format(bid=0) + ".weight"].data_type
KeyError: 'blk.0.attn_q.weight'
```
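
The "Unexpected tensor name ... post_layernorm" warning suggests that vision-tower tensors were still present in the checkpoint handed to convert.py, and the legacy convert.py only understands plain Llama-family layouts, so it never produces blk.0.attn_q.weight. One quick, unofficial way to see what is still left in the checkpoint before converting is a listing like the sketch below; the shard format and key substrings are assumptions, and the repo's own conversion doc (linked in the next reply) is the authoritative route:

```python
# Unofficial sanity check: list multimodal tensors still present in a
# checkpoint before handing it to llama.cpp's converters.
# Assumes safetensors shards; use torch.load for pytorch_model-*.bin.
from pathlib import Path
from safetensors import safe_open

ckpt_dir = Path("path/to/finetuned-bunny")  # placeholder path
for shard in sorted(ckpt_dir.glob("*.safetensors")):
    with safe_open(str(shard), framework="pt") as f:
        for name in f.keys():
            # Anything vision-related here means the surgery step
            # did not fully strip the multimodal tensors.
            if "vision_tower" in name or "mm_projector" in name:
                print(f"{shard.name}: {name}")
```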

Isaachhh commented 1 month ago

https://github.com/BAAI-DCAI/Bunny/blob/main/script/conversion_to_GGUF.md

zhaohengxing commented 1 month ago

Thanks for your help!