abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License

Tokenisers not round-tripping a string #1531

Open riedgar-ms opened 3 months ago

riedgar-ms commented 3 months ago

Expected Behavior

I am sending a string through a string -> tokens -> string round trip and expect to get the same string back.

Current Behavior

The round trip adds an extra space at the start of the final string.
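
A minimal sketch of the symptom, using the same tokenize/detokenize calls as the full repro below (my_tok is the tokenizer returned by Llama.tokenizer()):

encoded = my_tok.tokenize(b"hello", add_bos=False, special=False)
print(my_tok.detokenize(encoded))  # observed: b' hello' -- note the leading space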

Environment and Context

Core i7-13800

Windows 11

Python 3.12.3

Latest llama-cpp-python, installed via pip

Failure Information (for bugs)

Steps to Reproduce

The following pytest module reproduces the failure with two GGUF models:

import pytest

from huggingface_hub import hf_hub_download
from typing import Any

class TestLlamaCppTokenizers:
    LLAMACPP_MODELS = [
        dict(
            gguf="TheBloke/Llama-2-7B-GGUF:llama-2-7b.Q5_K_M.gguf",
            kwargs={"verbose": True, "n_ctx": 4096},
        ),
        dict(
            gguf="microsoft/Phi-3-mini-4k-instruct-gguf:Phi-3-mini-4k-instruct-q4.gguf",
            kwargs={"verbose": True, "n_ctx": 4096},
        ),
    ]

    def get_tokenizer(self, model_info: dict[str, Any]):
        import llama_cpp

        # Download the GGUF file from the Hugging Face Hub, load the model,
        # and return its tokenizer
        repo_id, gguf_file = model_info["gguf"].split(":")
        downloaded_file = hf_hub_download(repo_id=repo_id, filename=gguf_file)
        lm = llama_cpp.Llama(model_path=downloaded_file, logits_all=True, **model_info["kwargs"])
        return lm.tokenizer()

    @pytest.mark.parametrize("model_info", LLAMACPP_MODELS)
    def test_smoke(self, model_info: dict[str, Any]):
        my_tok = self.get_tokenizer(model_info)
        assert my_tok is not None

    @pytest.mark.parametrize("model_info", LLAMACPP_MODELS)
    @pytest.mark.parametrize("target_string", ["hello", "’"])
    def test_string_roundtrip(self, model_info: dict[str, Any], target_string: str):
        my_tok = self.get_tokenizer(model_info)

        # Encode without a BOS token or special-token handling, then decode
        encoded = my_tok.tokenize(target_string.encode(), add_bos=False, special=False)
        decoded = my_tok.detokenize(encoded)
        final_string = decoded.decode()

        # Fails: the decoded string gains a leading space, e.g. ' hello' != 'hello'
        assert final_string == target_string
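
(Assuming the module is saved as tests/test_tokenizers.py, matching the logs below, the failing tests can be run with pytest tests/test_tokenizers.py -k roundtrip.)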

Failure Logs

From pytest I'm seeing failures from the final assert:

FAILED tests/test_tokenizers.py::TestLlamaCppTokenizers::test_string_roundtrip[hello-model_info0] - AssertionError: assert ' hello' == 'hello'
FAILED tests/test_tokenizers.py::TestLlamaCppTokenizers::test_string_roundtrip[hello-model_info1] - AssertionError: assert ' hello' == 'hello'
FAILED tests/test_tokenizers.py::TestLlamaCppTokenizers::test_string_roundtrip[\u2019-model_info0] - AssertionError: assert ' ’' == '’'
FAILED tests/test_tokenizers.py::TestLlamaCppTokenizers::test_string_roundtrip[\u2019-model_info1] - AssertionError: assert ' ’' == '’'
CISC commented 3 months ago

This is normal for Llama tokenizers; I don't know about Phi-3. It depends on the add_prefix_space metadata.
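
A hedged workaround sketch, assuming the only discrepancy is the single space that SentencePiece-style tokenizers with add_prefix_space prepend (detokenize_stripped is a hypothetical helper, not part of llama-cpp-python):

def detokenize_stripped(tok, tokens: list[int], original: str) -> str:
    # Hypothetical helper: undo the single space that an
    # add_prefix_space tokenizer prepends during detokenization
    decoded = tok.detokenize(tokens).decode()
    if decoded.startswith(" ") and not original.startswith(" "):
        decoded = decoded[1:]
    return decoded

Under that assumption, detokenize_stripped(my_tok, encoded, target_string) round-trips both test strings above.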