utilityai / llama-cpp-rs


Match Llama.cpp default sampling ? #161

Closed - oddpxl closed this 5 months ago

oddpxl commented 5 months ago

I'd like to automate a few tests to make sure a model works - ( with llama.cpp as a baseline )

Currently I can't seem to match Llama.cpp's answer... ( llama-cpp-rs answers incorrectly )

..trying the llama-cpp-rs example OR my modified version ( see below )

--

..as a reference - Oobabooga using the same model gets the correct answer.

( not exactly the same - but logically correct, like llama.cpp )

--

I presume this is down to llama-cpp-rs not yet having the same sample chain ?

( we don't seem to have CFG - maybe I'm using sample greedy / sample stages / something else the wrong way )

--

That said...

Question... Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?

Llama-cpp-rs answer... ..close but incorrect

Let's compare the cost of each type of berry:

1. Blueberries cost more than strawberries.
2. Blueberries cost less than raspberries.

From the first statement, we know that blueberries are more expensive than strawberries. From the second statement, we know that blueberries are cheaper than raspberries.

To determine if the third statement, "Raspberries cost more than strawberries and blueberries," is true, we need to compare the cost of raspberries to both strawberries and blueberries.

Since blueberries are cheaper than raspberries, but more expensive than strawberries, and we don't have enough information to compare the cost of raspberries to strawberries directly, we cannot definitively say whether the third statement is true or false based on the given information.

---> Therefore, the answer is: Insufficient information to determine.

Llama.cpp answer... ...correct

Let's compare the prices of each type of berry:
1. Blueberries cost more than strawberries.
2. Blueberries cost less than raspberries.

To determine if the third statement "Raspberries cost more than strawberries and blueberries" is true, we need to compare the price of raspberries with both strawberries and blueberries:

1. Raspberries cost more than strawberries: This is not stated directly in the given information, but it can be inferred from statement 1 (blueberries cost less than raspberries, and blueberries cost more than strawberries).
2. Raspberries cost more than blueberries: This is stated directly in the second statement.

Therefore, based on the given information, 

---> the third statement "Raspberries cost more than strawberries and blueberries" is true. [end of text]

The model TheBloke/Mistral-7B-Instruct-v0.2-GGUF --> mistral-7b-instruct-v0.2.Q4_K_S.gguf

Default Llama.cpp sample order... CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature

Sample settings...
repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.100
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000

The code... ( please forgive my Rust - only rusted for two months... )

    let model = init_model()?;
    let backend = LlamaBackend::init()?;
    let ctx_params = init_context()?;
    run_prompt("Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?", &model, &backend, &ctx_params)?;

Calling the following...

//! This is a translation of simple.cpp in llama.cpp using llama-cpp-2 -- with additional sample stages
#![allow(
    clippy::cast_possible_wrap,
    clippy::cast_possible_truncation,
    clippy::cast_precision_loss,
    clippy::cast_sign_loss
)]

use anyhow::{/* anyhow,*/ bail, Context, Result};
use llama_cpp_2::context::params::LlamaContextParams;
use llama_cpp_2::ggml_time_us;
use llama_cpp_2::llama_backend::LlamaBackend;
use llama_cpp_2::llama_batch::LlamaBatch;
use llama_cpp_2::model::params::LlamaModelParams;
use llama_cpp_2::model::AddBos;
use llama_cpp_2::model::LlamaModel;
use llama_cpp_2::token::data_array::LlamaTokenDataArray;

use llama_cpp_2::token::LlamaToken;

use std::io::Write;
use std::num::NonZeroU32;
use std::time::Duration;

pub fn init_model() -> Result<LlamaModel> {
    let backend = LlamaBackend::init()?;
    let model_params = LlamaModelParams::default()
        .with_n_gpu_layers(33)
        .with_use_mlock(false);
        //.with_use_mlock(true);

    let model_path = std::env::current_exe()
        .expect("Failed to get current executable path")
        .parent()
        .expect("Failed to get executable directory")
        .read_dir()
        .expect("Failed to read directory contents")
        .filter_map(|entry| entry.ok())
        .find(|entry| entry.path().extension().and_then(std::ffi::OsStr::to_str) == Some("gguf"))
        .expect("No .gguf file found in the current directory")
        .path();

    let model = LlamaModel::load_from_file(&backend, &model_path, &model_params)
        .with_context(|| "unable to load model")?;

    Ok(model)
}

pub fn init_context() -> Result<LlamaContextParams> {
    let ctx_params = LlamaContextParams::default()
        .with_n_ctx(NonZeroU32::new(2048))
        .with_seed(1234);

    Ok(ctx_params)
}

pub fn run_prompt(prompt: &str, model: &LlamaModel, backend: &LlamaBackend, ctx_params: &LlamaContextParams) -> Result<()> {
    let n_len = 512;

    let mut ctx = model
        .new_context(backend, ctx_params.clone())
        .with_context(|| "unable to create the llama_context")?;

    let tokens_list = model
        .str_to_token(prompt, AddBos::Always)
        .with_context(|| format!("failed to tokenize {prompt}"))?;

    let n_cxt = ctx.n_ctx() as i32;
    let n_kv_req = tokens_list.len() as i32 + (n_len - tokens_list.len() as i32);

    eprintln!("n_len = {n_len}, n_ctx = {n_cxt}, k_kv_req = {n_kv_req}");

    if n_kv_req > n_cxt {
        bail!(
            "n_kv_req > n_ctx, the required kv cache size is not big enough
either reduce n_len or increase n_ctx"
        )
    }

    if tokens_list.len() >= usize::try_from(n_len)? {
        bail!("the prompt is too long, it has more tokens than n_len")
    }

    // print the prompt token-by-token
    eprintln!();

    for token in &tokens_list {
        eprint!("{}", model.token_to_str(*token)?);
    }

    std::io::stderr().flush()?;

    // create a llama_batch with size 512
    // we use this object to submit token data for decoding
    let mut batch = LlamaBatch::new(512, 1);

    let last_index: i32 = (tokens_list.len() - 1) as i32;
    for (i, token) in (0_i32..).zip(tokens_list.into_iter()) {
        // llama_decode will output logits only for the last token of the prompt
        let is_last = i == last_index;
        batch.add(token, i, &[0], is_last)?;
    }

    ctx.decode(&mut batch)
        .with_context(|| "llama_decode() failed")?;

    // main loop

    let mut n_cur = batch.n_tokens();
    let mut n_decode = 0;

    let t_main_start = ggml_time_us();

    while n_cur <= n_len {
        let candidates = ctx.candidates_ith(batch.n_tokens() - 1);

        let mut candidates_p = LlamaTokenDataArray::from_iter(candidates, false);

            // Llama.cpp default sample order...
            // CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
            // --------------------------------------------------------------------------------
            // Sample settings... 
            //repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
            // top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.100
            //mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000

            //CFG seems we don't have it ?? ( only in llama.cpp )

            // Penalties
            let history = vec![
                LlamaToken::new(2),
                LlamaToken::new(1),
                LlamaToken::new(0),
                ];

            ctx.sample_repetition_penalty(&mut candidates_p, &history, 64, 1.1,
                0.0, 0.0);

            ctx.sample_top_k(&mut candidates_p, 40, 1); 

            ctx.sample_tail_free(&mut candidates_p, 1.0, 1); 

            ctx.sample_typical(&mut candidates_p, 1.0, 1);

            ctx.sample_top_p(&mut candidates_p, 0.950, 1);

            ctx.sample_min_p(&mut candidates_p, 0.05, 1);

            ctx.sample_temp(&mut candidates_p, 0.1);

            let new_token_id = ctx.sample_token_greedy(candidates_p);

        if new_token_id == model.token_eos() {
            eprintln!();
            break;
        }

        print!("{}", model.token_to_str(new_token_id)?);
        std::io::stdout().flush()?;

        batch.clear();
        batch.add(new_token_id, n_cur, &[0], true)?;

        n_cur += 1;

        ctx.decode(&mut batch).with_context(|| "failed to eval")?;

        n_decode += 1;
    }

    eprintln!("\n");

    let t_main_end = ggml_time_us();

    let duration = Duration::from_micros((t_main_end - t_main_start) as u64);

    eprintln!(
        "decoded {} tokens in {:.2} s, speed {:.2} t/s\n",
        n_decode,
        duration.as_secs_f32(),
        n_decode as f32 / duration.as_secs_f32()
    );

    println!("{}", ctx.timings());

    Ok(())
}

Llama.cpp full log

./main -p "Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?" -m mistral-7b-instruct-v0.2.Q4_K_S.gguf -n 512 -ngl 33 --threads 8 --temp 0.1

Log start main: build = 2409 (306d34be) main: built with Apple clang version 15.0.0 (clang-1500.1.0.2.5) for arm64-apple-darwin23.3.0 main: seed = 1710284006 llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /Users/odd/Documents/odd_LLM_rust/llama-cpp-rs-mod-odd/target/release/mistral-7b-instruct-v0.2.Q4_K_S.gguf (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str = mistralai_mistral-7b-instruct-v0.2 llama_model_loader: - kv 2: llama.context_length u32 = 32768 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096 llama_model_loader: - kv 4: llama.block_count u32 = 32 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000 llama_model_loader: - kv 11: general.file_type u32 = 14 llama_model_loader: - kv 12: tokenizer.ggml.model str = llama llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["", "", "", "<0x00>", "<... llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0 llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess... llama_model_loader: - kv 23: general.quantization_version u32 = 2 llama_model_loader: - type f32: 65 tensors llama_model_loader: - type q4_K: 217 tensors llama_model_loader: - type q5_K: 8 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: special tokens definition check successful ( 259/32000 ). 
llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32000 llm_load_print_meta: n_merges = 0 llm_load_print_meta: n_ctx_train = 32768 llm_load_print_meta: n_embd = 4096 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_layer = 32 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 4 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: n_ff = 14336 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attm = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 1000000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_yarn_orig_ctx = 32768 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: model type = 7B llm_load_print_meta: model ftype = Q4_K - Small llm_load_print_meta: model params = 7.24 B llm_load_print_meta: model size = 3.86 GiB (4.57 BPW) llm_load_print_meta: general.name = mistralai_mistral-7b-instruct-v0.2 llm_load_print_meta: BOS token = 1 '' llm_load_print_meta: EOS token = 2 '' llm_load_print_meta: UNK token = 0 '' llm_load_print_meta: PAD token = 0 '' llm_load_print_meta: LF token = 13 '<0x0A>' llm_load_tensors: ggml ctx size = 0.22 MiB ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 3877.58 MiB, ( 3877.64 / 49152.00) llm_load_tensors: offloading 32 repeating layers to GPU llm_load_tensors: offloading non-repeating layers to GPU llm_load_tensors: offloaded 33/33 layers to GPU llm_load_tensors: Metal buffer size = 3877.57 MiB llm_load_tensors: CPU buffer size = 70.31 MiB .................................................................................................. llama_new_context_with_model: n_ctx = 512 llama_new_context_with_model: freq_base = 1000000.0 llama_new_context_with_model: freq_scale = 1 ggml_metal_init: allocating ggml_metal_init: found device: Apple M1 Max ggml_metal_init: picking default device: Apple M1 Max ggml_metal_init: default.metallib not found, loading from source ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil ggml_metal_init: loading '/Users/odd/Documents/odd_LLM_rust/llama.cpp/ggml-metal.metal' ggml_metal_init: GPU name: Apple M1 Max ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007) ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003) ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001) ggml_metal_init: simdgroup reduction support = true ggml_metal_init: simdgroup matrix mul. 
support = true ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 51539.61 MB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 64.00 MiB, ( 3943.45 / 49152.00) llama_kv_cache_init: Metal KV buffer size = 64.00 MiB llama_new_context_with_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB llama_new_context_with_model: CPU input buffer size = 10.01 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 73.02 MiB, ( 4016.47 / 49152.00) llama_new_context_with_model: Metal compute buffer size = 73.00 MiB llama_new_context_with_model: CPU compute buffer size = 8.00 MiB llama_new_context_with_model: graph splits (measure): 2

system_info: n_threads = 8 / 10 | AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | sampling: repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000 top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.100 mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000 sampling order: CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature generate: n_ctx = 512, n_batch = 512, n_predict = 512, n_keep = 1

Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?

Let's compare the prices of each type of berry:

  1. Blueberries cost more than strawberries.
  2. Blueberries cost less than raspberries.

To determine if the third statement "Raspberries cost more than strawberries and blueberries" is true, we need to compare the price of raspberries with both strawberries and blueberries:

  1. Raspberries cost more than strawberries: This is not stated directly in the given information, but it can be inferred from statement 1 (blueberries cost less than raspberries, and blueberries cost more than strawberries).
  2. Raspberries cost more than blueberries: This is stated directly in the second statement.

Therefore, based on the given information, the third statement "Raspberries cost more than strawberries and blueberries" is true. [end of text]

llama_print_timings: load time = 252.89 ms llama_print_timings: sample time = 15.60 ms / 184 runs ( 0.08 ms per token, 11797.14 tokens per second) llama_print_timings: prompt eval time = 164.76 ms / 43 tokens ( 3.83 ms per token, 260.98 tokens per second) llama_print_timings: eval time = 3579.66 ms / 183 runs ( 19.56 ms per token, 51.12 tokens per second) llama_print_timings: total time = 3782.13 ms / 226 tokens ggml_metal_free: deallocating Log end


LLAMA-CPP-RS - original example - full log

./llama-cpp-rs --n-len 512 "Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?" local mistral-7b-instruct-v0.2.Q4_K_S.gguf

llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /Users/odd/Documents/odd_LLM_rust/llama-cpp-rs-odd/target/release/mistral-7b-instruct-v0.2.Q4_K_S.gguf (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str = mistralai_mistral-7b-instruct-v0.2 llama_model_loader: - kv 2: llama.context_length u32 = 32768 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096 llama_model_loader: - kv 4: llama.block_count u32 = 32 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000 llama_model_loader: - kv 11: general.file_type u32 = 14 llama_model_loader: - kv 12: tokenizer.ggml.model str = llama llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["", "", "", "<0x00>", "<... llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0 llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess... llama_model_loader: - kv 23: general.quantization_version u32 = 2 llama_model_loader: - type f32: 65 tensors llama_model_loader: - type q4_K: 217 tensors llama_model_loader: - type q5_K: 8 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: special tokens definition check successful ( 259/32000 ). 
llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32000 llm_load_print_meta: n_merges = 0 llm_load_print_meta: n_ctx_train = 32768 llm_load_print_meta: n_embd = 4096 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_layer = 32 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 4 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: n_ff = 14336 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 1000000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_yarn_orig_ctx = 32768 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: model type = 7B llm_load_print_meta: model ftype = Q4_K - Small llm_load_print_meta: model params = 7.24 B llm_load_print_meta: model size = 3.86 GiB (4.57 BPW) llm_load_print_meta: general.name = mistralai_mistral-7b-instruct-v0.2 llm_load_print_meta: BOS token = 1 '' llm_load_print_meta: EOS token = 2 '' llm_load_print_meta: UNK token = 0 '' llm_load_print_meta: PAD token = 0 '' llm_load_print_meta: LF token = 13 '<0x0A>' llm_load_tensors: ggml ctx size = 0.22 MiB ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 3877.58 MiB, ( 3877.64 / 49152.00) llm_load_tensors: offloading 32 repeating layers to GPU llm_load_tensors: offloading non-repeating layers to GPU llm_load_tensors: offloaded 33/33 layers to GPU llm_load_tensors: Metal buffer size = 3877.57 MiB llm_load_tensors: CPU buffer size = 70.31 MiB .................................................................................................. llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: freq_base = 1000000.0 llama_new_context_with_model: freq_scale = 1 ggml_metal_init: allocating ggml_metal_init: found device: Apple M1 Max ggml_metal_init: picking default device: Apple M1 Max ggml_metal_init: default.metallib not found, loading from source ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil ggml_metal_init: error: could not use bundle path to find ggml-metal.metal, falling back to trying cwd ggml_metal_init: loading 'ggml-metal.metal' ggml_metal_init: GPU name: Apple M1 Max ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007) ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003) ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001) ggml_metal_init: simdgroup reduction support = true ggml_metal_init: simdgroup matrix mul. 
support = true ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 51539.61 MB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 256.00 MiB, ( 4135.45 / 49152.00) llama_kv_cache_init: Metal KV buffer size = 256.00 MiB llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB llama_new_context_with_model: CPU input buffer size = 13.02 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 164.02 MiB, ( 4299.47 / 49152.00) llama_new_context_with_model: Metal compute buffer size = 164.00 MiB llama_new_context_with_model: CPU compute buffer size = 8.00 MiB llama_new_context_with_model: graph splits (measure): 2 n_len = 512, n_ctx = 2048, k_kv_req = 512

Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?

Let's compare the cost of each type of berry:

  1. Blueberries cost more than strawberries.
  2. Blueberries cost less than raspberries.

From the first statement, we know that blueberries are more expensive than strawberries. From the second statement, we know that blueberries are cheaper than raspberries.

To determine if the third statement, "Raspberries cost more than strawberries and blueberries," is true, we need to compare the cost of raspberries to both strawberries and blueberries.

Since blueberries are cheaper than raspberries, but more expensive than strawberries, and we don't have enough information to compare the cost of raspberries to strawberries directly, we cannot definitively say whether the third statement is true or false based on the given information.

decoded 177 tokens in 3.46 s, speed 51.15 t/s

load time = 350.73 ms sample time = 20.21 ms / 178 runs (0.11 ms per token, 8805.34 tokens per second) prompt eval time = 291.05 ms / 43 tokens (6.77 ms per token, 147.74 tokens per second) eval time = 3437.89 ms / 177 runs (19.42 ms per token, 51.49 tokens per second) total time = 3810.63 ms ggml_metal_free: deallocating


LLAMA-CPP-RS modified example full log

llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /Users/odd/Documents/odd_LLM_rust/llama-cpp-rs-mod-odd/target/release/mistral-7b-instruct-v0.2.Q4_K_S.gguf (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.name str = mistralai_mistral-7b-instruct-v0.2 llama_model_loader: - kv 2: llama.context_length u32 = 32768 llama_model_loader: - kv 3: llama.embedding_length u32 = 4096 llama_model_loader: - kv 4: llama.block_count u32 = 32 llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336 llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 7: llama.attention.head_count u32 = 32 llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000 llama_model_loader: - kv 11: general.file_type u32 = 14 llama_model_loader: - kv 12: tokenizer.ggml.model str = llama llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["", "", "", "<0x00>", "<... llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000... llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ... llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1 llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2 llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0 llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0 llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess... llama_model_loader: - kv 23: general.quantization_version u32 = 2 llama_model_loader: - type f32: 65 tensors llama_model_loader: - type q4_K: 217 tensors llama_model_loader: - type q5_K: 8 tensors llama_model_loader: - type q6_K: 1 tensors llm_load_vocab: special tokens definition check successful ( 259/32000 ). 
llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32000 llm_load_print_meta: n_merges = 0 llm_load_print_meta: n_ctx_train = 32768 llm_load_print_meta: n_embd = 4096 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_layer = 32 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 4 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: n_ff = 14336 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 1000000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_yarn_orig_ctx = 32768 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: model type = 7B llm_load_print_meta: model ftype = Q4_K - Small llm_load_print_meta: model params = 7.24 B llm_load_print_meta: model size = 3.86 GiB (4.57 BPW) llm_load_print_meta: general.name = mistralai_mistral-7b-instruct-v0.2 llm_load_print_meta: BOS token = 1 '' llm_load_print_meta: EOS token = 2 '' llm_load_print_meta: UNK token = 0 '' llm_load_print_meta: PAD token = 0 '' llm_load_print_meta: LF token = 13 '<0x0A>' llm_load_tensors: ggml ctx size = 0.22 MiB ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 3877.58 MiB, ( 3877.64 / 49152.00) llm_load_tensors: offloading 32 repeating layers to GPU llm_load_tensors: offloading non-repeating layers to GPU llm_load_tensors: offloaded 33/33 layers to GPU llm_load_tensors: Metal buffer size = 3877.57 MiB llm_load_tensors: CPU buffer size = 70.31 MiB .................................................................................................. llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: freq_base = 1000000.0 llama_new_context_with_model: freq_scale = 1 ggml_metal_init: allocating ggml_metal_init: found device: Apple M1 Max ggml_metal_init: picking default device: Apple M1 Max ggml_metal_init: default.metallib not found, loading from source ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil ggml_metal_init: error: could not use bundle path to find ggml-metal.metal, falling back to trying cwd ggml_metal_init: loading 'ggml-metal.metal' ggml_metal_init: GPU name: Apple M1 Max ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007) ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003) ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001) ggml_metal_init: simdgroup reduction support = true ggml_metal_init: simdgroup matrix mul. 
support = true ggml_metal_init: hasUnifiedMemory = true ggml_metal_init: recommendedMaxWorkingSetSize = 51539.61 MB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 256.00 MiB, ( 4135.45 / 49152.00) llama_kv_cache_init: Metal KV buffer size = 256.00 MiB llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB llama_new_context_with_model: CPU input buffer size = 13.02 MiB ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 164.02 MiB, ( 4299.47 / 49152.00) llama_new_context_with_model: Metal compute buffer size = 164.00 MiB llama_new_context_with_model: CPU compute buffer size = 8.00 MiB llama_new_context_with_model: graph splits (measure): 2 n_len = 512, n_ctx = 2048, k_kv_req = 512

Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?

Let's compare the cost of each type of berry:

  1. Blueberries cost more than strawberries.
  2. Blueberries cost less than raspberries.

From the first statement, we know that blueberries are more expensive than strawberries. From the second statement, we know that blueberries are cheaper than raspberries.

To determine if the third statement, "Raspberries cost more than strawberries and blueberries," is true, we need to compare the cost of raspberries to both strawberries and blueberries.

Since blueberries are cheaper than raspberries, but more expensive than strawberries, and we don't have enough information to compare the cost of raspberries to strawberries directly, we cannot definitively say whether the third statement is true or false based on the given information.

Therefore, the answer is: Insufficient information to determine.

decoded 192 tokens in 3.74 s, speed 51.36 t/s

load time = 379.33 ms sample time = 14.58 ms / 193 runs (0.08 ms per token, 13238.22 tokens per second) prompt eval time = 293.73 ms / 43 tokens (6.83 ms per token, 146.39 tokens per second) eval time = 3720.89 ms / 192 runs (19.38 ms per token, 51.60 tokens per second) total time = 4116.71 ms ggml_metal_free: deallocating

MarcusDunn commented 5 months ago

Make sure your history is the history of the tokens produced - nothing else jumps out at me as egregious. Also ensure that the seeds are the same across both tests (these answers are close enough that it may just be luck). I can add CFG if neither of those "fix" it and see if that changes anything.
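
In case it helps, a sketch of what that could look like in the example above - reusing its variable names, with the elided parts staying exactly as posted:

    // Keep a history of generated tokens, starting empty, declared before the loop.
    let mut history: Vec<LlamaToken> = Vec::new();

    while n_cur <= n_len {
        let candidates = ctx.candidates_ith(batch.n_tokens() - 1);
        let mut candidates_p = LlamaTokenDataArray::from_iter(candidates, false);

        // Penalize only tokens the model has actually produced so far.
        ctx.sample_repetition_penalty(&mut candidates_p, &history, 64, 1.1, 0.0, 0.0);

        // ... the remaining sampling stages, unchanged from the code above ...

        let new_token_id = ctx.sample_token_greedy(candidates_p);

        // Record the token so later iterations penalize it too.
        history.push(new_token_id);

        // ... feed new_token_id back into the batch and decode, as in the code above ...
    }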

MarcusDunn commented 5 months ago

ah, also you are using sample_token_greedy - instead, use sample_token_softmax and take the 0th token (for "greedy" sampling), or sample_token (not sure if there's a binding yet)

I need to document the behaviour better - but greedy sampling uses the probabilities and not the logits (which is what every other sampling function modifies).

Here's the tail end of our sampling code.

self.context.sample_token_softmax(&mut candidates);
candidates.data[0].id()
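
Applied to the example in the issue, the swap might look like this (a sketch reusing the same variable names):

    // Instead of: let new_token_id = ctx.sample_token_greedy(candidates_p);
    // sort the (already filtered/penalized) candidates by probability via softmax,
    // then take the most probable entry - effectively greedy over probabilities:
    ctx.sample_token_softmax(&mut candidates_p);
    let new_token_id = candidates_p.data[0].id();
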
oddpxl commented 5 months ago

Thanks for your patience Marcus - much appreciated.

Alright I'm completely out of my depth here lol but I now get llama-cpp-rs to answer correctly - even when varying the seed.

( see changes below )

..fun fact: varying the seed, llama.cpp "mostly" gets the answer right... but a few times it failed... :) Oh well :)

In general ( obviously without any scientific testing ) llama-cpp-rs seems more stable with the added sampling steps.

Found this... ..time to study --> https://github.com/ggerganov/llama.cpp/tree/master/examples/main#top-k-sampling

--

Added history and softmax... ..hopefully got it right

        let candidates = ctx.candidates_ith(batch.n_tokens() - 1);
        let mut candidates_p = LlamaTokenDataArray::from_iter(candidates, false);

        // Apply repetition penalty
        // Note: Matched settings to what llama.cpp claim to use as default
        candidates_p.sample_repetition_penalty(None, &history, 64, 1.1, 0.0, 0.0);

        ctx.sample_top_k(&mut candidates_p, 40, 1); 

        ctx.sample_tail_free(&mut candidates_p, 1.0, 1); 

        ctx.sample_typical(&mut candidates_p, 1.0, 1);

        ctx.sample_top_p(&mut candidates_p, 0.950, 1);

        ctx.sample_min_p(&mut candidates_p , 0.05, 1);

        ctx.sample_temp(&mut candidates_p, 0.1);

        ctx.sample_token_softmax(&mut candidates_p);
        let new_token_id = candidates_p.data[0].id();

        // Add the new token to the history for repetition penalty in future iterations
        history.push(new_token_id);

chenhunghan commented 5 months ago

@oddpxl would you mind sharing the code where you place history? Outside of the loop? And was it also with vec![LlamaToken::new(2), LlamaToken::new(1), LlamaToken::new(0)]? And how did you get llama.cpp to print these

system_info: n_threads = 8 / 10 | AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 |
sampling:
repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.100
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 512, n_batch = 512, n_predict = 512, n_keep = 1

I'm trying to apply temp, top_k etc., but don't know where to get started - your help will be much appreciated!

oddpxl commented 5 months ago

@chenhunghan Hey - I'd be happy to... ( see below for the modified example in its current state )

@MarcusDunn Perhaps add another example for anyone who would like to customise sampling ?

--

As a reference - using gemma-2b.gguf I get 105 t/s on an M1 Max 64GB - not too shabby ;)

( performance is 1:1 on par with llama.cpp )

--

Llama.cpp prints those stats if you clone the repo, compile it, and then run ./main ( in a terminal )

On my Mac M1 Max that would be...

./main -p "Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?" -m llama-2-7b-chat.Q4_0.gguf -n 128 -ngl 33 --mlock --threads 8

main.rs

mod llama_local;
use llama_local::*;
use llama_cpp_2::llama_backend::LlamaBackend;

fn main() -> Result<(), Box<dyn std::error::Error>> {

    println!("Hey lets go...");

    let model = init_model()?;
    let backend = LlamaBackend::init()?;
    let ctx_params = init_context()?;
    run_prompt("Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?", &model, &backend, &ctx_params)?;

    Ok(())
}

llama_local.rs

#![allow(
    clippy::cast_possible_wrap,
    clippy::cast_possible_truncation,
    clippy::cast_precision_loss,
    clippy::cast_sign_loss
)]

use anyhow::{/* anyhow,*/ bail, Context, Result};

use llama_cpp_2::context::params::LlamaContextParams;
use llama_cpp_2::ggml_time_us;
use llama_cpp_2::llama_backend::LlamaBackend;
use llama_cpp_2::llama_batch::LlamaBatch;
use llama_cpp_2::model::params::LlamaModelParams;
use llama_cpp_2::model::AddBos;
use llama_cpp_2::model::LlamaModel;
use llama_cpp_2::token::data_array::LlamaTokenDataArray;
//use llama_cpp_2::token::data::LlamaTokenData;
use llama_cpp_2::token::LlamaToken;

//use std::collections::BTreeMap;

use std::io::Write;
use std::num::NonZeroU32;
use std::time::Duration;

pub fn init_model() -> Result<LlamaModel> {
    let backend = LlamaBackend::init()?;
    let model_params = LlamaModelParams::default()
        .with_n_gpu_layers(33)
        .with_use_mlock(false);
        //.with_use_mlock(true);

    let model_path = std::env::current_exe()
        .expect("Failed to get current executable path")
        .parent()
        .expect("Failed to get executable directory")
        .read_dir()
        .expect("Failed to read directory contents")
        .filter_map(|entry| entry.ok())
        .find(|entry| entry.path().extension().and_then(std::ffi::OsStr::to_str) == Some("gguf"))
        .expect("No .gguf file found in the current directory")
        .path();

    let model = LlamaModel::load_from_file(&backend, &model_path, &model_params)
        .with_context(|| "unable to load model")?;

    Ok(model)
}

pub fn init_context() -> Result<LlamaContextParams> {
    let ctx_params = LlamaContextParams::default()
        .with_n_ctx(NonZeroU32::new(2048))
        .with_seed(1337);

    Ok(ctx_params)
}

pub fn run_prompt(prompt: &str, model: &LlamaModel, backend: &LlamaBackend, ctx_params: &LlamaContextParams) -> Result<()> {

    let n_len = 512;

    let mut ctx = model
        .new_context(backend, ctx_params.clone())
        .with_context(|| "unable to create the llama_context")?;

    let tokens_list = model
        .str_to_token(prompt, AddBos::Always)
        .with_context(|| format!("failed to tokenize {prompt}"))?;

    let n_cxt = ctx.n_ctx() as i32;
    let n_kv_req = tokens_list.len() as i32 + (n_len - tokens_list.len() as i32);

    eprintln!("n_len = {n_len}, n_ctx = {n_cxt}, k_kv_req = {n_kv_req}");

    if n_kv_req > n_cxt {
        bail!(
            "n_kv_req > n_ctx, the required kv cache size is not big enough
either reduce n_len or increase n_ctx"
        )
    }

    if tokens_list.len() >= usize::try_from(n_len)? {
        bail!("the prompt is too long, it has more tokens than n_len")
    }

    // print the prompt token-by-token
    eprintln!();

    for token in &tokens_list {
        eprint!("{}", model.token_to_str(*token)?);
    }

    std::io::stderr().flush()?;

    // create a llama_batch with size 512
    // we use this object to submit token data for decoding
    let mut batch = LlamaBatch::new(512, 1);

    let last_index: i32 = (tokens_list.len() - 1) as i32;
    for (i, token) in (0_i32..).zip(tokens_list.into_iter()) {
        // llama_decode will output logits only for the last token of the prompt
        let is_last = i == last_index;
        batch.add(token, i, &[0], is_last)?;
    }

    ctx.decode(&mut batch)
        .with_context(|| "llama_decode() failed")?;

    // main loop
    let mut n_cur = batch.n_tokens();
    let mut n_decode = 0;

    let t_main_start = ggml_time_us();

    let mut history: Vec<LlamaToken> = Vec::new();

    while n_cur <= n_len {

            // Llama.cpp default sample order...
            // CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
            // --------------------------------------------------------------------------------
            // Sample settings... 
            // repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
            // top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.100
            // mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000

            //CFG seems we don't have it ?? ( only in llama.cpp )

        let candidates = ctx.candidates_ith(batch.n_tokens() - 1);
        let mut candidates_p = LlamaTokenDataArray::from_iter(candidates, false);

        //let new_token_id = ctx.sample_token_greedy(candidates_p);

        // Apply repetition penalty
        // Note: parameters according to llama.cpp default
        candidates_p.sample_repetition_penalty(None, &history, 64, 1.1, 
            0.0, 0.0);

        ctx.sample_top_k(&mut candidates_p, 40, 1); 

        ctx.sample_tail_free(&mut candidates_p, 1.0, 1); 

        ctx.sample_typical(&mut candidates_p, 1.0, 1);

        ctx.sample_top_p(&mut candidates_p, 0.950, 1);

        ctx.sample_min_p(&mut candidates_p , 0.05, 1);

        ctx.sample_temp(&mut candidates_p, 0.1);

        //ctx.sample_token_softmax(&mut candidates_p);
        let new_token_id = candidates_p.data[0].id();

        // Add the new token to the history for repetition penalty in future iterations
        history.push(new_token_id);

        if new_token_id == model.token_eos() {
            eprintln!();
            break;
        }

        print!("{}", model.token_to_str(new_token_id)?);
        std::io::stdout().flush()?;

        batch.clear();
        batch.add(new_token_id, n_cur, &[0], true)?;

        n_cur += 1;

        ctx.decode(&mut batch).with_context(|| "failed to eval")?;

        n_decode += 1;
    }

    eprintln!("\n");

    let t_main_end = ggml_time_us();

    let duration = Duration::from_micros((t_main_end - t_main_start) as u64);

    eprintln!(
        "decoded {} tokens in {:.2} s, speed {:.2} t/s\n",
        n_decode,
        duration.as_secs_f32(),
        n_decode as f32 / duration.as_secs_f32()
    );

    println!("{}", ctx.timings());

    Ok(())
}

MarcusDunn commented 5 months ago

I'll work on fixing up some of the sampling abstraction we use internally and moving it into here. Should make building sampling stacks a little less error prone.

chenhunghan commented 5 months ago

Thank you! Would love to contribute an example (if you think applying sampling parameters is in the scope of this repo).

MarcusDunn commented 5 months ago

https://docs.rs/llama-cpp-2/latest/llama_cpp_2/context/sample/sampler/index.html

not 100% sure on the correctness of the example - but feel free to try to put it into simple
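
In the meantime, a rough sketch of pulling the chain from this thread into a single helper - built only from the calls already used above, with the signatures assumed to match the crate version in this thread (the helper name sample_next is made up here, so treat it as untested):

use llama_cpp_2::context::LlamaContext;
use llama_cpp_2::token::data_array::LlamaTokenDataArray;
use llama_cpp_2::token::LlamaToken;

/// One sampling step following llama.cpp's default order (minus CFG):
/// Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature.
fn sample_next(
    ctx: &mut LlamaContext,
    mut candidates_p: LlamaTokenDataArray,
    history: &[LlamaToken],
) -> LlamaToken {
    ctx.sample_repetition_penalty(&mut candidates_p, history, 64, 1.1, 0.0, 0.0);
    ctx.sample_top_k(&mut candidates_p, 40, 1);
    ctx.sample_tail_free(&mut candidates_p, 1.0, 1);
    ctx.sample_typical(&mut candidates_p, 1.0, 1);
    ctx.sample_top_p(&mut candidates_p, 0.95, 1);
    ctx.sample_min_p(&mut candidates_p, 0.05, 1);
    ctx.sample_temp(&mut candidates_p, 0.1);
    // Greedy over probabilities: softmax-sort, then take the top entry.
    ctx.sample_token_softmax(&mut candidates_p);
    candidates_p.data[0].id()
}

Called from the generation loop as let new_token_id = sample_next(&mut ctx, candidates_p, &history); followed by history.push(new_token_id);, mirroring the modified example earlier in this thread.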