OpenBMB / llama.cpp

Port of Facebook's LLaMA model in C/C++
MIT License

sync master #8

Closed tc-mb closed 4 months ago

tc-mb commented 4 months ago

Add an optional MLP bias for ARCH_LLAMA to support Granite models. Partially addresses ggerganov/llama.cpp/issues/7116; some further changes are still needed to properly support Granite.
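To illustrate what "optional MLP bias" means here: vanilla LLaMA uses a bias-free SwiGLU feed-forward, while Granite checkpoints carry bias tensors. A minimal NumPy sketch (names and layout are illustrative, not llama.cpp's internal ones):

```python
import numpy as np

def silu(x):
    # SiLU activation used by LLaMA-style feed-forward layers.
    return x / (1.0 + np.exp(-x))

def llama_ffn(x, w_gate, w_up, w_down,
              b_gate=None, b_up=None, b_down=None):
    """LLaMA-style SwiGLU FFN with optional (Granite-style) biases.

    Biases are None for vanilla LLaMA weights and present for Granite;
    this is a sketch of the idea, not the actual llama.cpp code path.
    """
    gate = x @ w_gate
    if b_gate is not None:
        gate = gate + b_gate
    up = x @ w_up
    if b_up is not None:
        up = up + b_up
    out = (silu(gate) * up) @ w_down
    if b_down is not None:
        out = out + b_down
    return out
```

With all biases left as `None` the function reduces exactly to the standard LLaMA feed-forward, which is why the bias can be made optional under the same ARCH_LLAMA architecture.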

Propagate the add_space_prefix setting from the HF model configuration to the GGUF file and honor it in the GPT-2 tokenizer.
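The flag originates in the HF tokenizer definition, where SentencePiece-style tokenizers typically express the prefix space as a "Prepend" normalizer in tokenizer.json. A simplified sketch of how the conversion step could detect it (the lookup logic here is an assumption for illustration, not the script's exact code):

```python
import json

def read_add_space_prefix(tokenizer_json_path):
    """Decide whether the HF tokenizer prepends a space to input text.

    Sketch: looks for a SentencePiece-style 'Prepend' normalizer
    producing the U+2581 meta symbol; the real convert script may
    inspect additional fields.
    """
    with open(tokenizer_json_path) as f:
        cfg = json.load(f)
    normalizer = cfg.get("normalizer") or {}
    # A normalizer may be a single entry or a Sequence of entries.
    parts = normalizer.get("normalizers", [normalizer])
    for n in parts:
        if n.get("type") == "Prepend" and n.get("prepend") == "\u2581":
            return True
    return False
```

The resulting boolean is then written as a key/value pair into the GGUF metadata so the runtime tokenizer can honor it without re-reading the HF files.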

This works only for the small Granite models (3B and 8B).

The convert-hf-to-gguf.py script uses the vocabulary size of the Granite models to detect Granite and set the correct configuration.
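The detection heuristic can be sketched as follows. The vocabulary size used here (49152, inherited from the StarCoder tokenizer that the Granite code models use) and the exact fields checked are assumptions for illustration; the real script may consult more of the HF config:

```python
# Assumed Granite vocabulary size for this sketch (StarCoder tokenizer).
GRANITE_VOCAB_SIZE = 49152

def looks_like_granite(hf_config):
    """Heuristic in the spirit of convert-hf-to-gguf.py: a checkpoint
    declared as a LLaMA architecture whose vocab size matches Granite's
    is treated as Granite (sketch, not the script's exact logic)."""
    return (hf_config.get("architectures") == ["LlamaForCausalLM"]
            and hf_config.get("vocab_size") == GRANITE_VOCAB_SIZE)
```

Keying the detection on vocabulary size avoids requiring a separate architecture string in the HF config, at the cost of misfiring if another LLaMA-architecture model ever ships with the same vocabulary size.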