abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License

model.tokenize result is different from transformers tokenizer result #1468

Open · Liufeiran123 opened 5 months ago

Liufeiran123 commented 5 months ago

The `model.tokenize` result is different from the transformers tokenizer result.

Using the same model, Qwen1.5-7B:

```python
input_ids1 = model.tokenize(prompt.encode("utf-8"))
input_ids2 = tokenizer([prompt], padding=False)["input_ids"]
```

The two results differ: `input_ids1 != input_ids2`.
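For anyone trying to reproduce this, here is a minimal self-contained sketch of the comparison. The GGUF path and the Hugging Face repo name are assumptions, and `add_bos=False` is passed on the llama-cpp-python side because Qwen's Hugging Face tokenizer does not prepend a BOS token by default:

```python
from llama_cpp import Llama
from transformers import AutoTokenizer

prompt = "Hello, world!"

# llama-cpp-python side; the model path is a placeholder for a local GGUF
# file, and vocab_only=True loads only the tokenizer, not the weights.
model = Llama(model_path="./qwen1_5-7b.gguf", vocab_only=True)
input_ids1 = model.tokenize(prompt.encode("utf-8"), add_bos=False, special=False)

# transformers side, using the matching Hugging Face checkpoint.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B")
input_ids2 = tokenizer([prompt], padding=False)["input_ids"][0]

print("llama-cpp-python:", input_ids1)
print("transformers:    ", input_ids2)
print("match:", input_ids1 == input_ids2)
```

On a llama.cpp build from before the fix mentioned below, the two token lists diverge on prompts that exercise Qwen's BPE pre-tokenizer; on a fixed build they should match.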

tombolano commented 5 months ago

It seems that the problem with the Qwen1.5-7B model was actually a bug in llama.cpp that has already been fixed (https://github.com/ggerganov/llama.cpp/issues/7384), so I guess this issue can be closed.
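For anyone still hitting this on an older build: upgrading the package (`pip install --upgrade llama-cpp-python`) pulls in a newer bundled llama.cpp, after which the comparison above should produce matching IDs. The installed binding version can be checked with:

```python
import llama_cpp

# Version of the Python package; each release vendors a specific
# llama.cpp revision, so this indicates roughly how old the bundled
# llama.cpp is.
print(llama_cpp.__version__)
```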