Closed: gakugaku closed this issue 3 months ago
Hi, that's not a bug. convert-hf-to-gguf.py automatically detects which vocab should be used based on the model. There's no need for --vocab-type anymore. I'll fix the readme.
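For reference, a hedged sketch of the current invocation with no --vocab-type flag; the model path and output name below are placeholders, not from this thread:

```shell
# Placeholder paths; the vocab type is detected automatically from the
# model's tokenizer files, so no --vocab-type flag is passed.
python convert-hf-to-gguf.py ./models/my-model --outfile ./models/my-model.gguf
```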
Struggling with the same issue while trying to convert the minicpm-2.5 model, which is dynamically generated (like llava-surgery) during the pre-conversion process. Forcing the tokenizer/model type should be an available option.
In the same process I also got an error about trusting remote code during get_vocab_base - from_pretrained()
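The trust-remote-code error typically originates in the underlying from_pretrained() call. A minimal sketch of the usual workaround, assuming the transformers package is installed; the model id below is a placeholder, not one named in this thread:

```python
# Sketch only: get_vocab_base ultimately loads the tokenizer via
# AutoTokenizer.from_pretrained; models that ship custom tokenizer code
# require trust_remote_code=True to load. Placeholder model id.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "some-org/some-custom-model",  # placeholder, not a real repo id
    trust_remote_code=True,
)
```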
@cmp-nct Yup, convert-hf-to-gguf-update.py is a bit of a pain for models with new tokenizers. I replied in more detail in #7599, but you can still use examples/convert-legacy-llama.py, which is the old convert.py script. We should add some sort of option for this, though. I agree; I'll take a look at that tomorrow.
What happened?
README:
https://github.com/ggerganov/llama.cpp/blob/f578b86b2123d0f92afbaa98a031df4d4464e582/README.md?plain=1#L625-L626
Actual Output:
Name and Version
$ ./llama-cli --version
version: 3143 (f578b86b)
built with cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 for x86_64-linux-gnu
What operating system are you seeing the problem on?
Linux
Relevant log output
No response