guidance-ai / guidance

A guidance language for controlling large language models.
MIT License

LlamaCpp always using gpt2 tokeniser #978

Open prnvbn opened 1 month ago

prnvbn commented 1 month ago

The bug: When using models.LlamaCpp, the selected tokenizer is always gpt2 (this can be seen in the output when the verbose=True arg is set). I have pasted the dumped KV metadata keys below:

llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from REDACTED (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Llama 3.1 8B
llama_model_loader: - kv   3:                           general.basename str              = llama-3.1
llama_model_loader: - kv   4:                         general.size_label str              = 8B
llama_model_loader: - kv   5:                          llama.block_count u32              = 32
llama_model_loader: - kv   6:                       llama.context_length u32              = 131072
llama_model_loader: - kv   7:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   8:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   9:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  10:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  12:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  13:                          general.file_type u32              = 1
llama_model_loader: - kv  14:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  15:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  16:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  17:                         tokenizer.ggml.pre str              = smaug-bpe
llama_model_loader: - kv  18:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  20:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  21:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  22:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  23:               general.quantization_version u32              = 2

Is there something else that is required to properly set the tokenizer? Note that I am using locally downloaded Llama 3.1 8B GGUF weights.
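For context, the kind of explicit override I was hoping for would look roughly like the sketch below. This is only a sketch: it assumes llama-cpp-python's LlamaHFTokenizer, that the HF repo id is correct, and that guidance forwards the tokenizer argument through to llama_cpp.Llama, none of which I have confirmed.

from llama_cpp.llama_tokenizer import LlamaHFTokenizer
from guidance import models

# Hypothetical workaround: build an HF-backed tokenizer for the same model and
# hand it to llama_cpp.Llama instead of relying on the GGUF metadata.
hf_tokenizer = LlamaHFTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")  # assumed repo id

llama3 = models.LlamaCpp(
    model_path,              # local Llama 3.1 8B GGUF weights
    tokenizer=hf_tokenizer,  # assumption: passed through to llama_cpp.Llama
    verbose=True,
)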

To Reproduce
Give a full working code snippet that can be pasted into a notebook cell or python file. Make sure to include the LLM load step so we know which model you are using.

from guidance import models, gen

llama3 = models.LlamaCpp(
    model_path,               # path to the locally downloaded Llama 3.1 8B GGUF file
    n_gpu_layers=NUM_LAYERS_13B,
    n_batch=512,
    n_ctx=N_CONTEXT,
    echo=False,
    temperature=0.5,
    verbose=True,  # set to True to see if GPU offloading is happening properly
    llama_cpp_kwargs={
        "tokenizer": tokenizer,
    },
)

llama3 + 'Do you want a joke or a poem? ' + gen(stop='.')
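For what it's worth, a quick way to check whether the gpt2 label in the metadata actually changes the tokens would be to tokenize with llama-cpp-python directly, bypassing guidance (a sketch; model_path is the same local GGUF file):

from llama_cpp import Llama

# Load the same GGUF directly to inspect its tokenizer
llm = Llama(model_path=model_path, n_ctx=512, verbose=False)

print(llm.n_vocab())  # expect 128256, per llama.vocab_size in the dump above
toks = llm.tokenize(b"Do you want a joke or a poem? ")
print(toks)
print(llm.detokenize(toks))  # should round-trip the prompt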

System info (please complete the following information):

prnvbn commented 1 month ago

Seems related to this issue: https://github.com/guidance-ai/guidance/issues/869