ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Can't fine-tune my model using llama.cpp; it stops at the first iteration !!! #6768

Closed (walidbet18 closed this issue 3 months ago)

walidbet18 commented 4 months ago

main: seed: 1
main: model base = '/mnt/c/Users/walid.bettahar/virtualassistant/build/VirtualAssistant/dolphin-2.2.1-mistral-7b.Q2_K.gguf'
llama_model_loader: loaded meta data with 20 key-value pairs and 291 tensors from /mnt/c/Users/walid.bettahar/virtualassistant/build/VirtualAssistant/dolphin-2.2.1-mistral-7b.Q2_K.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = ehartford_dolphin-2.2.1-mistral-7b
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 11: general.file_type u32 = 10
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32002] = ["", "", "", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32002] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32002] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 32000
llama_model_loader: - kv 18: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 19: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q2_K: 65 tensors
llama_model_loader: - type q3_K: 160 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 261/32002 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32002
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q2_K - Medium
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 2.87 GiB (3.41 BPW)
llm_load_print_meta: general.name = ehartford_dolphin-2.2.1-mistral-7b
llm_load_print_meta: BOS token = 1 ''
llm_load_print_meta: EOS token = 32000 '<|im_end|>'
llm_load_print_meta: UNK token = 0 ''
llm_load_print_meta: PAD token = 0 ''
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.11 MiB
llm_load_tensors: CPU buffer size = 2939.58 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 64.00 MiB
llama_new_context_with_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB
llama_new_context_with_model: CPU input buffer size = 10.01 MiB
llama_new_context_with_model: CPU compute buffer size = 72.00 MiB
llama_new_context_with_model: graph splits (measure): 1
main: init model
print_params: n_vocab : 32002
print_params: n_ctx : 64
print_params: n_embd : 4096
print_params: n_ff : 14336
print_params: n_head : 32
print_params: n_head_kv : 8
print_params: n_layer : 32
print_params: norm_rms_eps : 0.000010
print_params: rope_freq_base : 10000.000000
print_params: rope_freq_scale : 1.000000
print_lora_params: n_rank_attention_norm : 1
print_lora_params: n_rank_wq : 4
print_lora_params: n_rank_wk : 4
print_lora_params: n_rank_wv : 4
print_lora_params: n_rank_wo : 4
print_lora_params: n_rank_ffn_norm : 1
print_lora_params: n_rank_ffn_gate : 4
print_lora_params: n_rank_ffn_down : 4
print_lora_params: n_rank_ffn_up : 4
print_lora_params: n_rank_tok_embeddings : 4
print_lora_params: n_rank_norm : 1
print_lora_params: n_rank_output : 4
main: total train_iterations 0
main: seen train_samples 0
main: seen train_tokens 0
main: completed train_epochs 0
main: lora_size = 88796064 bytes (84.7 MB)
main: opt_size = 132491440 bytes (126.4 MB)
main: opt iter 0
main: input_size = 32771104 bytes (31.3 MB)
main: compute_size = 32336630080 bytes (30838.6 MB)
main: evaluation order = LEFT_TO_RIGHT
main: tokenize training data from /mnt/c/Users/walid.bettahar/virtualassistant/texte_recupere.txt
main: sample-start:
main: include-sample-start: false
tokenize_file: total number of samples: 744
main: number of training tokens: 808
main: number of unique tokens: 219
main: train data seems to have changed. restarting shuffled epoch.
main: begin training
main: work_size = 512272 bytes (0.5 MB)
train_opt_callback: iter= 0 sample=1/744 sched=0.000000 loss=0.000000 |->
[1] + Done "/usr/bin/gdb" --interpreter=mi --tty=${DbgTerm} 0<"/tmp/Microsoft-MIEngine-In-xm0h5vpf.nr4" 1>"/tmp/Microsoft-MIEngine-Out-ascpxlur.kow"

{ "name": "Fine Tune", "type": "cppdbg", "request": "launch", "program": "${workspaceFolder}/build/VirtualAssistant/AI/llama.cpp/examples/finetune/finetune", "args": ["--model-base", "${workspaceFolder}/build/VirtualAssistant/dolphin-2.2.1-mistral-7b.Q2_K.gguf", "--ctx", "64", "--checkpoint-in", "chk-dolphin-2.2.1-mistral-7b.Q2_0-shakespeare-LATEST.gguf", "--checkpoint-out", "chk-dolphin-2.2.1-mistral-7b.Q2_0-shakespeare-ITERATION.gguf", "--lora-out", "dolphin-2.2.1-mistral-7b.Q2_0-shakespeare-ITERATION.bin", "--train-data", "${workspaceFolder}/texte_recupere.txt", "-t", "4", "-b", "4", "--seed", "1", "--adam-iter", "30", "--no-checkpointing", "--save-every", "10", ], "environment": [{ "name": "config", "value": "Debug" }], "cwd": "${workspaceFolder}" },

github-actions[bot] commented 3 months ago

This issue was closed because it has been inactive for 14 days since being marked as stale.