ggerganov / llama.cpp

LLM inference in C/C++

Deepseek-based model throws `std::out_of_range` exception on load #5688

Closed: brittlewis12 closed this issue 8 months ago

brittlewis12 commented 8 months ago

Model: OpenCodeInterpreter-DS-6.7B (GGUFs)

This is a deepseek coder instruct-based model (llama arch), but maybe there's something distinct about it that requires special handling?

Or maybe I did something wrong converting these files from the original safetensors (I used the same build, b2249, for converting, quantizing, and running).

Both `-ngl 999` and `-ngl 0` produce the same exception:

```
libc++abi: terminating due to uncaught exception of type std::out_of_range: unordered_map::at: key not found
```

llama.cpp build info

lldb stacktrace

```
Process 25487 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
    frame #0: 0x0000000188223330 libc++abi.dylib`__cxa_throw
libc++abi.dylib`__cxa_throw:
->  0x188223330 <+0>:  pacibsp
    0x188223334 <+4>:  stp    x22, x21, [sp, #-0x30]!
    0x188223338 <+8>:  stp    x20, x19, [sp, #0x10]
    0x18822333c <+12>: stp    x29, x30, [sp, #0x20]
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
  * frame #0: 0x0000000188223330 libc++abi.dylib`__cxa_throw
    frame #1: 0x00000001000684c0 main`std::__1::__throw_out_of_range[abi:v160006](char const*) + 60
    frame #2: 0x000000010006a790 main`llama_byte_to_token(llama_vocab const&, unsigned char) + 472
    frame #3: 0x000000010003d270 main`llama_model_load(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, llama_model&, llama_model_params&) + 1968
    frame #4: 0x000000010003ca08 main`llama_load_model_from_file + 420
    frame #5: 0x00000001000a208c main`llama_init_from_gpt_params(gpt_params&) + 96
    frame #6: 0x00000001000ed73c main`main + 2404
    frame #7: 0x0000000187ee90e0 dyld`start + 2360
```
full lldb output from `./main`:

```
(lldb) target create "./main"
Current executable set to '/Users/tito/code/llama.cpp/main' (arm64).
(lldb) settings set -- target.run-args "-m" "/Users/tito/code/autogguf/OpenCodeInterpreter-DS-6.7B/opencodeinterpreter-ds-6.7b.Q4_K_M.gguf" "-t" "7" "--color" "--ctx_size" "4096" "--keep" "4" "--in-prefix" "<|User|>\\n" "--in-suffix" "\\n<|Assistant|>\\n" "-r" "<|User|>" "-r" "<|Assistant|>" "-r" "<|EOT|>" "-ins" "-b" "512" "-n" "-1" "--temp" "0.7" "--repeat_penalty" "1.1" "-ngl" "0"
(lldb) breakpoint set -E C++
Breakpoint 1: no locations (pending).
(lldb) run
Process 25487 launched: '/Users/tito/code/llama.cpp/main' (arm64)
2 locations added to breakpoint 1
Log start
main: build = 2249 (15499eb9)
main: built with Apple clang version 15.0.0 (clang-1500.1.0.2.5) for arm64-apple-darwin23.3.0
main: seed = 1708707124
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /Users/tito/code/autogguf/OpenCodeInterpreter-DS-6.7B/opencodeinterpreter-ds-6.7b.Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0: general.architecture str = llama
llama_model_loader: - kv   1: general.name str = .
llama_model_loader: - kv   2: llama.context_length u32 = 16384
llama_model_loader: - kv   3: llama.embedding_length u32 = 4096
llama_model_loader: - kv   4: llama.block_count u32 = 32
llama_model_loader: - kv   5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv   6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv   7: llama.attention.head_count u32 = 32
llama_model_loader: - kv   8: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv   9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv  10: llama.rope.freq_base f32 = 100000.000000
llama_model_loader: - kv  11: llama.rope.scaling.type str = linear
llama_model_loader: - kv  12: llama.rope.scaling.factor f32 = 4.000000
llama_model_loader: - kv  13: general.file_type u32 = 15
llama_model_loader: - kv  14: tokenizer.ggml.model str = llama
llama_model_loader: - kv  15: tokenizer.ggml.tokens arr[str,32256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16: tokenizer.ggml.scores arr[f32,32256] = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  17: tokenizer.ggml.token_type arr[i32,32256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18: tokenizer.ggml.bos_token_id u32 = 32013
llama_model_loader: - kv  19: tokenizer.ggml.eos_token_id u32 = 32021
llama_model_loader: - kv  20: tokenizer.ggml.padding_token_id u32 = 32014
llama_model_loader: - kv  21: tokenizer.chat_template str = {%- set found_item = false -%}\n{%- fo...
llama_model_loader: - kv  22: general.quantization_version u32 = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
Process 25487 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
    frame #0: 0x0000000188223330 libc++abi.dylib`__cxa_throw
libc++abi.dylib`__cxa_throw:
->  0x188223330 <+0>:  pacibsp
    0x188223334 <+4>:  stp    x22, x21, [sp, #-0x30]!
    0x188223338 <+8>:  stp    x20, x19, [sp, #0x10]
    0x18822333c <+12>: stp    x29, x30, [sp, #0x20]
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
  * frame #0: 0x0000000188223330 libc++abi.dylib`__cxa_throw
    frame #1: 0x00000001000684c0 main`std::__1::__throw_out_of_range[abi:v160006](char const*) + 60
    frame #2: 0x000000010006a790 main`llama_byte_to_token(llama_vocab const&, unsigned char) + 472
    frame #3: 0x000000010003d270 main`llama_model_load(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, llama_model&, llama_model_params&) + 1968
    frame #4: 0x000000010003ca08 main`llama_load_model_from_file + 420
    frame #5: 0x00000001000a208c main`llama_init_from_gpt_params(gpt_params&) + 96
    frame #6: 0x00000001000ed73c main`main + 2404
    frame #7: 0x0000000187ee90e0 dyld`start + 2360
```
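
Frame #2 points at the byte-token lookup: with `tokenizer.ggml.model = llama` (SPM), raw bytes are resolved through `<0xXX>` entries in the vocab, and `unordered_map::at` throws when those entries are missing. A simplified sketch of that lookup (modeled on llama.cpp's internals around b2249, not a verbatim copy):

```cpp
#include <cstdio>
#include <string>
#include <unordered_map>

// Simplified sketch of the lookup in frame #2 (llama_byte_to_token): for an
// SPM ("llama") vocab, a raw byte is rendered as "<0xXX>" and resolved
// through the token map.
static int byte_to_token(const std::unordered_map<std::string, int> & token_to_id, unsigned char ch) {
    char buf[8];
    std::snprintf(buf, sizeof(buf), "<0x%02X>", (unsigned) ch);
    // unordered_map::at throws std::out_of_range when the key is absent, and
    // a BPE vocab written with --vocab-type hfft contains no "<0xXX>" byte
    // tokens even though tokenizer.ggml.model claims "llama" (SPM).
    return token_to_id.at(buf);
}
```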

conversion info

```
$ python3.11 ./convert.py OpenCodeInterpreter-DS-6.7B \
  --outtype f16 \
  --outfile opencodeinterpreter-ds-6.7b.fp16.gguf \
  --vocab-type hfft \
  --pad-vocab
```
brittlewis12 commented 8 months ago

Whoops, it was indeed my mistake in the conversion!

Turns out that, while the base instruct model uses a fast tokenizer, this model instead uses the regular llama tokenizer, which means I should've converted with BPE!

Reconverted & quantized, and what do you know, it runs great.
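
For reference, the working conversion is presumably the original command with just the vocab type swapped, consistent with the `tokenizer.ggml.model str = gpt2` entry in the re-converted GGUF below:

```
$ python3.11 ./convert.py OpenCodeInterpreter-DS-6.7B \
  --outtype f16 \
  --outfile opencodeinterpreter-ds-6.7b.fp16.gguf \
  --vocab-type bpe \
  --pad-vocab
```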


I doubt it's worth investigating the crash on its own, given the incorrectly produced model file.

But maybe there could be a way to detect this sort of mistake at conversion time and short-circuit the process? Auto vocab-type detection would be beneficial, but that's out of scope for this issue.
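
As a purely hypothetical illustration of that kind of guard (nothing like this exists in llama.cpp, and the suggestion above is about conversion time rather than load time), the loader side could validate an SPM vocab up front so a wrong `--vocab-type` fails with a readable error instead of an uncaught `std::out_of_range`:

```cpp
#include <cstdio>
#include <stdexcept>
#include <string>
#include <unordered_map>

// Hypothetical guard, not actual llama.cpp code: for an SPM ("llama") vocab,
// check all 256 "<0xXX>" byte tokens up front so a mis-converted model fails
// with a descriptive error before tokenization ever needs them.
static void validate_byte_tokens(const std::unordered_map<std::string, int> & token_to_id) {
    for (int ch = 0; ch < 256; ++ch) {
        char buf[8];
        std::snprintf(buf, sizeof(buf), "<0x%02X>", ch);
        if (token_to_id.find(buf) == token_to_id.end()) {
            throw std::runtime_error(std::string("vocab is missing byte token ") + buf +
                                     " -- was the model converted with the wrong --vocab-type?");
        }
    }
}
```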

ggerganov commented 8 months ago

> Reconverted & quantized, and what do you know, it runs great.

Huh, that's surprising. There's a long-pending PR that I thought needed to be merged to support DeepSeek models: #5464. AFAICT it should fix some tokenization problems and add conversion support.

I'm surprised that it worked for you.

brittlewis12 commented 8 months ago

The updated fp16 conversion and quants just finished uploading: hf link

It does seem to work fine, though! I haven't tested it too extensively, but:

full output running `main`:

```
❯ ./opencodeinterp.sh
Log start
main: build = 2249 (15499eb9)
main: built with Apple clang version 15.0.0 (clang-1500.1.0.2.5) for arm64-apple-darwin23.3.0
main: seed = 1708713048
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /Users/tito/code/autogguf/OpenCodeInterpreter-DS-6.7B/opencodeinterpreter-ds-6.7b.Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0: general.architecture str = llama
llama_model_loader: - kv   1: general.name str = .
llama_model_loader: - kv   2: llama.context_length u32 = 16384
llama_model_loader: - kv   3: llama.embedding_length u32 = 4096
llama_model_loader: - kv   4: llama.block_count u32 = 32
llama_model_loader: - kv   5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv   6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv   7: llama.attention.head_count u32 = 32
llama_model_loader: - kv   8: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv   9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv  10: llama.rope.freq_base f32 = 100000.000000
llama_model_loader: - kv  11: llama.rope.scaling.type str = linear
llama_model_loader: - kv  12: llama.rope.scaling.factor f32 = 4.000000
llama_model_loader: - kv  13: general.file_type u32 = 15
llama_model_loader: - kv  14: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv  15: tokenizer.ggml.tokens arr[str,32256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  16: tokenizer.ggml.scores arr[f32,32256] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17: tokenizer.ggml.token_type arr[i32,32256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  18: tokenizer.ggml.merges arr[str,31757] = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e...
llama_model_loader: - kv  19: tokenizer.ggml.bos_token_id u32 = 32013
llama_model_loader: - kv  20: tokenizer.ggml.eos_token_id u32 = 32021
llama_model_loader: - kv  21: tokenizer.ggml.padding_token_id u32 = 32014
llama_model_loader: - kv  22: tokenizer.chat_template str = {%- set found_item = false -%}\n{%- fo...
llama_model_loader: - kv  23: general.quantization_version u32 = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
llm_load_vocab: mismatch in special tokens definition ( 243/32256 vs 256/32256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 32256
llm_load_print_meta: n_merges = 31757
llm_load_print_meta: n_ctx_train = 16384
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 100000.0
llm_load_print_meta: freq_scale_train = 0.25
llm_load_print_meta: n_yarn_orig_ctx = 16384
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 6.74 B
llm_load_print_meta: model size = 3.80 GiB (4.84 BPW)
llm_load_print_meta: general.name = .
llm_load_print_meta: BOS token = 32013 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 32021 '<|EOT|>'
llm_load_print_meta: PAD token = 32014 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 126 'Ä'
llm_load_tensors: ggml ctx size = 0.22 MiB
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 3821.77 MiB, ( 3821.83 / 21845.34)
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: Metal buffer size = 3821.76 MiB
llm_load_tensors: CPU buffer size = 70.88 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 4096
llama_new_context_with_model: freq_base = 100000.0
llama_new_context_with_model: freq_scale = 0.25
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Pro
ggml_metal_init: picking default device: Apple M1 Pro
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/tito/code/llama.cpp/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M1 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction support = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 22906.50 MB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 2048.00 MiB, ( 5871.64 / 21845.34)
llama_kv_cache_init: Metal KV buffer size = 2048.00 MiB
llama_new_context_with_model: KV self size = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_new_context_with_model: CPU input buffer size = 17.04 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 296.03 MiB, ( 6167.67 / 21845.34)
llama_new_context_with_model: Metal compute buffer size = 296.02 MiB
llama_new_context_with_model: CPU compute buffer size = 8.00 MiB
llama_new_context_with_model: graph splits (measure): 3

system_info: n_threads = 7 / 10 | AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 |
main: interactive mode on.
Reverse prompt: '<|User|>'
Reverse prompt: '<|Assistant|>'
Reverse prompt: '<|EOT|>'
Reverse prompt: '### Instruction: '
Input prefix: '<|User|>\n'
Input suffix: '\n<|Assistant|>\n'
sampling: repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
	top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.700
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order: CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 4096, n_batch = 512, n_predict = -1, n_keep = 1

== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to LLaMa.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.

> <|User|>\nwrite fibonacci with a cache hashmap in ruby\n<|Assistant|>\n
def fibonacci(n, cache = {})
  if n == 0 || n == 1
    return n
  end

  unless cache[n]
    cache[n] = fibonacci(n - 1, cache) + fibonacci(n - 2, cache)
  end

  cache[n]
end

# Example usage:
puts fibonacci(5) # Output: 3

This function uses a technique called memoization to store the result of expensive function calls and reusing them when same inputs occur again. This greatly improves performance for recursive algorithms like this one, by reducing redundant calculations.

The cache hashmap serves as our "memory" in which we store previously calculated results of fibonacci(n - 1) and fibonacci(n - 2). The base case is if n equals to 0 or 1, return n itself because the first two numbers in Fibonacci sequence are 0 and 1. If not, it checks whether the result of this function call has been calculated before. If yes, it returns that result directly from the cache hashmap; otherwise, it calculates it by calling fibonacci(n - 1) + fibonacci(n - 2), stores it in the cache hashmap for future use and then return it.

> <|User|>\n
llama_print_timings: load time = 5624.60 ms
llama_print_timings: sample time = 188.29 ms / 305 runs ( 0.62 ms per token, 1619.81 tokens per second)
llama_print_timings: prompt eval time = 499.83 ms / 36 tokens ( 13.88 ms per token, 72.02 tokens per second)
llama_print_timings: eval time = 12152.07 ms / 305 runs ( 39.84 ms per token, 25.10 tokens per second)
llama_print_timings: total time = 187992.14 ms / 341 tokens
```

That script just calls `main` with `--in-prefix`/`--in-suffix`, `-ngl`, `--temp`, etc.
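
For completeness, it presumably looks something like the following, reconstructed from the lldb run-args earlier in the thread (the `-ngl` value is an assumption; the run above offloads 33/33 layers, so full offload here):

```
#!/bin/sh
# Reconstructed from the lldb run-args above -- the actual script wasn't posted.
./main \
  -m /Users/tito/code/autogguf/OpenCodeInterpreter-DS-6.7B/opencodeinterpreter-ds-6.7b.Q4_K_M.gguf \
  -t 7 --color \
  --ctx_size 4096 --keep 4 \
  --in-prefix "<|User|>\n" --in-suffix "\n<|Assistant|>\n" \
  -r "<|User|>" -r "<|Assistant|>" -r "<|EOT|>" \
  -ins -b 512 -n -1 \
  --temp 0.7 --repeat_penalty 1.1 \
  -ngl 999 # assumed: full offload, matching "offloaded 33/33 layers" above
```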