ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Bug: instruct models don't work with latest llama cpp #8463

Closed joshknnd1982 closed 1 week ago

joshknnd1982 commented 1 month ago

What happened?

When I try to use an instruct model with the latest llama.cpp, the model does not load. Also, the instruct flag is nowhere to be found.

Name and Version

b3384

What operating system are you seeing the problem on?

No response

Relevant log output

No response

dspasyuk commented 1 month ago

@joshknnd1982 Download the model: https://huggingface.co/dspasyuk/Meta-Llama-3-8B-Instruct-Q5_K_S-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Q5_K_S.gguf?download=true

Then try this:

../llama.cpp/llama-cli --model ./models/meta-llama-3-8b-instruct-q5_k_s.gguf --n-gpu-layers 25 -cnv --simple-io -b 2048 --ctx_size 0 --temp 0 --top_k 10 --multiline-input --chat-template llama3 --log-disable -p 'Role and Purpose: You are Alice, a large language model. Your purpose is to assist users by providing information, answering questions, and engaging in meaningful conversations based on the data you were trained on. Behavior and Tone: Be informative, engaging, and respectful. Maintain a neutral and unbiased tone. Ensure that responses are clear and concise. Capabilities: Use your training data to provide accurate and relevant information. Explain complex concepts in an easy-to-understand manner.'
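If you would rather fetch the file from the command line, something along these lines should work (assuming curl is available; the local filename just matches the one used in the command above):

curl -L -o ./models/meta-llama-3-8b-instruct-q5_k_s.gguf 'https://huggingface.co/dspasyuk/Meta-Llama-3-8B-Instruct-Q5_K_S-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Q5_K_S.gguf?download=true'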

joshknnd1982 commented 1 month ago

Can you adapt that for my windows bat file?

title llama.cpp
:start
llama-cli -i -if -t 11 -mli -n -2 --temp 0.5 --dynatemp-range 0.5 --repeat-penalty 1.3 --mirostat 2 --log-disable -m Meta-Llama-3-8B-Instruct.Q8_0.gguf
pause
goto start

dspasyuk commented 1 month ago

@joshknnd1982 Remove -i -if and add -cnv --chat-template llama3. Be careful with a high --repeat-penalty such as 1.3; the model may stop responding to you.

So:

llama-cli -cnv --chat-template llama3 -t 11 -mli -n -2 --temp 0.5 --dynatemp-range 0.5 --repeat-penalty 1.3 --mirostat 2 --log-disable -m Meta-Llama-3-8B-Instruct.Q8_0.gguf
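Dropped back into the batch-file structure you posted, that would look roughly like this (only the llama-cli line changes; the model path and thread count are the ones you already use):

title llama.cpp
:start
llama-cli -cnv --chat-template llama3 -t 11 -mli -n -2 --temp 0.5 --dynatemp-range 0.5 --repeat-penalty 1.3 --mirostat 2 --log-disable -m Meta-Llama-3-8B-Instruct.Q8_0.gguf
pause
goto start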

joshknnd1982 commented 1 month ago

I am attaching a log because another model is not working: main.log

dspasyuk commented 1 month ago

I cannot reproduce your issue, here is how I run it:

../llama.cpp/llama-cli --model /home/denis/Downloads/Meta-Llama-3-8B-Instruct-Dolfin-v0.1-Q6_K.gguf --n-gpu-layers 10 -cnv --simple-io -b 2048 --ctx_size 0 --temp 0 --top_k 10 --multiline-input --chat-template llama3 -p 'You are Alice, a large language model.' --mirostat 2

Log start
main: build = 3305 (d7fd29ff)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed = 1720833999
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from /home/denis/Downloads/Meta-Llama-3-8B-Instruct-Dolfin-v0.1-Q6_K.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = D:\AI
llama_model_loader: - kv 2: llama.vocab_size u32 = 128256
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.block_count u32 = 32
llama_model_loader: - kv 6: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 7: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 8: llama.attention.head_count u32 = 32
llama_model_loader: - kv 9: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 10: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: general.file_type u32 = 18
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.scores arr[f32,128256] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 128001
llama_model_loader: - kv 21: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q6_K: 226 tensors
llm_load_vocab: missing pre-tokenizer type, using: 'default'
llm_load_vocab:
llm_load_vocab: ****
llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
llm_load_vocab: CONSIDER REGENERATING THE MODEL
llm_load_vocab: ****
llm_load_vocab:
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.8000 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q6_K
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 6.14 GiB (6.56 BPW)
llm_load_print_meta: general.name = D:\AI
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: PAD token = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce GTX 1060 6GB, compute capability 6.1, VMM: yes
llm_load_tensors: ggml ctx size = 0.27 MiB
llm_load_tensors: offloading 10 repeating layers to GPU
llm_load_tensors: offloaded 10/33 layers to GPU
llm_load_tensors: CPU buffer size = 6282.97 MiB
llm_load_tensors: CUDA0 buffer size = 1706.56 MiB
.........................................................................................
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 704.00 MiB
llama_kv_cache_init: CUDA0 KV buffer size = 320.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.49 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 677.48 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 24.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 246
main: chat template example: <|start_header_id|>system<|end_header_id|>

You are a helpful assistant<|eot_id|><|start_header_id|>user<|end_header_id|>

Hello<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Hi there<|eot_id|><|start_header_id|>user<|end_header_id|>

How are you?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

system_info: n_threads = 8 / 16 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
main: interactive mode on.
sampling:
  repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
  top_k = 10, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.000
  mirostat = 2, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> mirostat
generate: n_ctx = 8192, n_batch = 2048, n_predict = -1, n_keep = 0

== Running in interactive mode. ==

system

You are Alice, a large language model.

Hello\
Hello! I'm Alice, a large language model designed to assist and communicate with users. How can I help you today?
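A note on the "missing pre-tokenizer type" warning in the log above: it means the GGUF was converted before llama.cpp started recording the pre-tokenizer metadata, so BPE tokenization may be slightly off (that is what the GENERATION QUALITY WILL BE DEGRADED message refers to). The usual fix is to re-convert the original Hugging Face model with a current checkout and re-quantize, roughly like this (script and tool names as in recent llama.cpp builds; the paths are placeholders):

python convert_hf_to_gguf.py /path/to/Meta-Llama-3-8B-Instruct --outfile Meta-Llama-3-8B-Instruct-f16.gguf --outtype f16
./llama-quantize Meta-Llama-3-8B-Instruct-f16.gguf Meta-Llama-3-8B-Instruct-Q6_K.gguf Q6_K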

github-actions[bot] commented 1 week ago

This issue was closed because it has been inactive for 14 days since being marked as stale.