The latest llama.cpp produces bad output for CodeShell, which worked well when it was originally merged into llama.cpp.
After updating convert-hf-to-gguf.py and convert-hf-to-gguf-update.py, I converted CodeShell-7b, a checkpoint that works well with an old version (5d55b0cd827bb0fcfedfa329a82bd5d6ef2c93ca), to GGUF. But running inference with it on the latest version produces poor output.
Tested command:
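The exact command was not captured in the report; a minimal reproduction sketch, assuming the stock conversion script and default paths (the model directory, output filename, and prompt below are placeholders, not the reporter's actual values):

```shell
# Hypothetical reproduction, not the reporter's exact command.
# Convert the HF checkpoint to GGUF (paths are assumptions):
python convert-hf-to-gguf.py ./CodeShell-7b --outfile codeshell-7b.gguf

# Run inference on the latest build; this is where the bad output appears:
./llama-cli -m codeshell-7b.gguf -p "def quicksort(arr):" -n 128
```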
I am also experiencing bad output for DS-coder-v2-lite: a bunch of "!!!!!!!!!!!".
Edit: I will create a separate issue with more details, but for now I can confirm that I am getting good output from commit 21be9ca.
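To compare against the known-good behavior, one can pin llama.cpp to the commit mentioned above and rebuild; a sketch, assuming a Makefile-based build (adjust to your build setup):

```shell
# Check out the last known-good commit reported above and rebuild from scratch.
git checkout 21be9ca
make clean && make -j
```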
Name and Version
version: 3281 (023b8807) built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
What operating system are you seeing the problem on?
Linux
Relevant log output