BAAI-DCAI / Bunny

A family of lightweight multimodal models.
Apache License 2.0
894 stars 67 forks

Please add detailed steps on How to convert the Bunny Family of Models to GGUF? #85

Closed criminact closed 4 months ago

criminact commented 4 months ago

I have tried converting the Bunny model to GGUF using the script at https://github.com/ggerganov/llama.cpp/tree/master/examples/llava (right now the script only supports LLaVA 1.5/1.6, Moondream, and MiniCPM). Please add your conversion script so the model can be consumed on edge devices.
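For reference, the LLaVA conversion flow in llama.cpp that the script above implements looks roughly like this. This is a sketch only: script names have changed across llama.cpp revisions (e.g. hyphenated vs. underscored names), the model path is a placeholder, and the flow is not Bunny-specific.

```shell
# Sketch of the llama.cpp LLaVA-style conversion flow (not Bunny-specific;
# script names vary across llama.cpp versions)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# 1. Split the multimodal checkpoint into its vision tower / projector
#    and language-model parts
python examples/llava/llava_surgery.py -m /path/to/model

# 2. Convert the vision encoder and projector to GGUF
python examples/llava/convert_image_encoder_to_gguf.py \
    -m /path/to/model \
    --llava-projector /path/to/model/llava.projector \
    --output-dir /path/to/model

# 3. Convert the language model itself to GGUF
python convert_hf_to_gguf.py /path/to/model
```

The surgery step is the part that breaks for architectures the script doesn't know about, which is why an unmodified script cannot handle a fine-tuned Bunny checkpoint.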

Isaachhh commented 4 months ago

We have released the GGUF format of Bunny-v1.0-4B and Bunny-Llama-3-8B-V.

criminact commented 4 months ago

@Isaachhh Right. But we would need the steps for how you convert the model to GGUF, since we have fine-tuned your vanilla Bunny-v1.0-4B model and want to run it on edge devices using llama.cpp.

We want to convert our fine-tuned Bunny model to GGUF, which is not possible with the script at https://github.com/ggerganov/llama.cpp/tree/master/examples/llava, hence your help is needed here.

criminact commented 4 months ago

Saw in #80 that llama.cpp doesn't support the S2 wrapper yet. But I think this was implemented, since Moondream uses SigLIP and conversion works for that model.

Please let me know if you need any help from our side. Having a detailed set of steps to convert a fine-tuned Bunny model to GGUF would do wonders for us. Thanks in advance.

Isaachhh commented 4 months ago

Llama.cpp supports SigLIP, so we released the GGUF format of Bunny-v1.0-4B and Bunny-Llama-3-8B-V.

Llama.cpp doesn't support S2-Wrapper, so we didn't release the GGUF format of Bunny-v1.1-4B.

As for the conversion, there is no trouble: I converted Bunny-v1.0-4B and Bunny-Llama-3-8B-V to GGUF format myself. But I am too busy these days to release an instruction. I will try my best to do so ASAP.

criminact commented 4 months ago

Cool @Isaachhh. It would be a great help if you could release the script you use to convert the Bunny-v1.0-4B and Bunny-Llama-3-8B-V models to GGUF. I can put up a set of instructions myself using that script in the documentation.

puffanddmx commented 4 months ago

Can you let me know how to convert BAAI/Bunny-Llama-3-8B-V into Q5_K_M GGUF format?

I want to create the GGUF myself based on it, but there is no documentation for converting the vision part into GGUF.

Isaachhh commented 4 months ago

https://github.com/BAAI-DCAI/Bunny/blob/main/script/conversion_to_GGUF.md

Isaachhh commented 4 months ago

@puffanddmx

Just download Bunny-Llama-3-8B-V-gguf and quantize the gguf files using llama.cpp.

e.g. ./quantize Bunny-Llama-3-8B-V-gguf/ggml-model-f16.gguf Bunny-Llama-3-8B-V-gguf/ggml-model-Q5_K_M.gguf Q5_K_M