unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

Unable to run saving GGUF F16, KeyError: '"name"'. #1104

Open ramzyizza opened 1 month ago

ramzyizza commented 1 month ago

Unsloth: You have 1 CPUs. Using safe_serialization is 10x slower. We shall switch to Pytorch saving, which will take 3 minutes and not 30 minutes. To force safe_serialization, set it to None instead.
Unsloth: Kaggle/Colab has limited disk space. We need to delete the downloaded model which will save 4-16GB of disk space, allowing you to save on Kaggle/Colab.
Unsloth: Will remove a cached repo with size 2.2G
Unsloth: Merging 4bit and LoRA weights to 16bit...
Unsloth: Will use up to 5.8 out of 12.67 RAM for saving.
100%|██████████| 28/28 [00:01<00:00, 14.24it/s]
Unsloth: Saving tokenizer... Done.
Unsloth: Saving model... This might take 5 minutes for Llama-7b...
Unsloth: Saving pct-classfier-fine-tuned/pytorch_model-00001-of-00002.bin...
Unsloth: Saving pct-classfier-fine-tuned/pytorch_model-00002-of-00002.bin...
Done.
Unsloth: Converting llama model. Can use fast conversion = False.
==((====))==  Unsloth: Conversion from QLoRA to GGUF information
   \\   /|    [0] Installing llama.cpp will take 3 minutes.
O^O/ \_/ \    [1] Converting HF to GGUF 16bits will take 3 minutes.
\        /    [2] Converting GGUF 16bits to ['f16'] will take 10 minutes each.
 "-____-"     In total, you will have to wait at least 16 minutes.

Unsloth: [0] Installing llama.cpp. This will take 3 minutes...
Unsloth: [1] Converting model at pct-classfier-fine-tuned into f16 GGUF format.
The output location will be ./pct-classfier-fine-tuned/unsloth.F16.gguf
This will take 3 minutes...
INFO:hf-to-gguf:Loading model: pct-classfier-fine-tuned
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:rope_freqs.weight, torch.float32 --> F32, shape = {64}
INFO:hf-to-gguf:gguf: loading model weight map from 'pytorch_model.bin.index.json'
INFO:hf-to-gguf:gguf: loading model part 'pytorch_model-00001-of-00002.bin'
INFO:hf-to-gguf:token_embd.weight, torch.float16 --> F16, shape = {3072, 128256}
INFO:hf-to-gguf:blk.0.attn_q.weight, torch.float16 --> F16, shape = {3072, 3072}
INFO:hf-to-gguf:blk.0.attn_k.weight, torch.float16 --> F16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.0.attn_v.weight, torch.float16 --> F16, shape = {3072, 1024}
INFO:hf-to-gguf:blk.0.attn_output.weight, torch.float16 --> F16, shape = {3072, 3072}
INFO:hf-to-gguf:blk.0.ffn_gate.weight, torch.float16 --> F16, shape = {3072, 8192}
INFO:hf-to-gguf:blk.0.ffn_up.weight, torch.float16 --> F16, shape = {3072, 8192}
INFO:hf-to-gguf:blk.0.ffn_down.weight, torch.float16 --> F16, shape = {8192, 3072}
INFO:hf-to-gguf:blk.0.attn_norm.weight, torch.float16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.0.ffn_norm.weight, torch.float16 --> F32, shape = {3072}
[... the same nine tensor export lines repeat for blk.1 through blk.27; 'pytorch_model-00002-of-00002.bin' is loaded partway through blk.20 ...]
INFO:hf-to-gguf:output_norm.weight, torch.float16 --> F32, shape = {3072}
INFO:hf-to-gguf:Set meta model
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:gguf: context length = 131072
INFO:hf-to-gguf:gguf: embedding length = 3072
INFO:hf-to-gguf:gguf: feed forward length = 8192
INFO:hf-to-gguf:gguf: head count = 24
INFO:hf-to-gguf:gguf: key-value head count = 8
INFO:hf-to-gguf:gguf: rope theta = 500000.0
INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-05
INFO:hf-to-gguf:gguf: file type = 1
INFO:hf-to-gguf:Set model tokenizer
INFO:gguf.vocab:Adding 280147 merge(s).
INFO:gguf.vocab:Setting special token type bos to 128000
INFO:gguf.vocab:Setting special token type eos to 128009
INFO:gguf.vocab:Setting special token type pad to 128004
INFO:gguf.vocab:Setting chat_template to {{- bos_token }} {%- if custom_tools is defined %} {%- set tools = custom_tools %} {%- endif %} {%- if not tools_in_user_message is defined %} {%- set tools_in_user_message = true %} {%- endif %} {%- if not date_string is defined %} {%- set date_string = "26 July 2024" %} {%- endif %} {%- if not tools is defined %} {%- set tools = none %} {%- endif %}

{#- This block extracts the system message, so we can slot it into the right place. #} {%- if messages[0]['role'] == 'system' %} {%- set system_message = messages[0]['content'] %} {%- set messages = messages[1:] %} {%- else %} {%- set system_message = "" %} {%- endif %}

{#- System message + builtin tools #} {{- "<|start_header_id|>system<|end_header_id|>

" }} {%- if builtin_tools is defined or tools is not none %} {{- "Environment: ipython " }} {%- endif %} {%- if builtin_tools is defined %} {{- "Tools: " + builtin_tools | reject('equalto', 'code_interpreter') | join(", ") + "

"}} {%- endif %} {{- "Cutting Knowledge Date: December 2023 " }} {{- "Today Date: " + date_string + "

" }} {%- if tools is not none and not tools_in_user_message %} {{- "You have access to the following functions. To call a function, please respond with JSON for a function call." }} {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }} {{- "Do not use variables.

" }} {%- for t in tools %} {{- t | tojson(indent=4) }} {{- "

" }} {%- endfor %} {%- endif %} {{- system_message }} {{- "<|eot_id|>" }}

{#- Custom tools are passed in a user message with some extra guidance #} {%- if tools_in_user_message and not tools is none %} {#- Extract the first user message so we can plug it in here #} {%- if messages | length != 0 %} {%- set first_user_message = messages[0]['content'] %} {%- set messages = messages[1:] %} {%- else %} {{- raise_exception("Cannot put tools in the first user message when there's no first user message!") }} {%- endif %} {{- '<|start_header_id|>user<|end_header_id|>

' -}} {{- "Given the following functions, please respond with a JSON for a function call " }} {{- "with its proper arguments that best answers the given prompt.

" }} {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }} {{- "Do not use variables.

" }} {%- for t in tools %} {{- t | tojson(indent=4) }} {{- "

" }} {%- endfor %} {{- first_user_message + "<|eot_id|>"}} {%- endif %}

{%- for message in messages %} {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %} {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>

'+ message['content'] + '<|eot_id|>' }} {%- elif 'tool_calls' in message %} {%- if not message.tool_calls|length == 1 %} {{- raise_exception("This model only supports single tool-calls at once!") }} {%- endif %} {%- set tool_call = message.tool_calls[0].function %} {%- if builtin_tools is defined and tool_call.name in builtin_tools %} {{- '<|start_header_id|>assistant<|end_header_id|>

' -}} {{- "<|python_tag|>" + tool_call.name + ".call(" }} {%- for arg_name, arg_val in tool_call.arguments | items %} {{- arg_name + '="' + arg_val + '"' }} {%- if not loop.last %} {{- ", " }} {%- endif %} {%- endfor %} {{- ")" }} {%- else %} {{- '<|start_header_id|>assistant<|end_header_id|>

' -}} {{- '{"name": "' + tool_call.name + '", ' }} {{- '"parameters": ' }} {{- tool_call.arguments | tojson }} {{- "}" }} {%- endif %} {%- if builtin_tools is defined %} {#- This means we're in ipython mode #} {{- "<|eom_id|>" }} {%- else %} {{- "<|eot_id|>" }} {%- endif %} {%- elif message.role == "tool" or message.role == "ipython" %} {{- "<|start_header_id|>ipython<|end_header_id|>

" }} {%- if message.content is mapping or message.content is iterable %} {{- message.content | tojson }} {%- else %} {{- message.content }} {%- endif %} {{- "<|eot_id|>" }} {%- endif %} {%- endfor %} {%- if add_generation_prompt %} {{- '<|start_header_id|>assistant<|end_header_id|>

' }} {%- endif %}

INFO:hf-to-gguf:Set model quantization version
INFO:gguf.gguf_writer:Writing the following files:
INFO:gguf.gguf_writer:pct-classfier-fine-tuned/unsloth.F16.gguf: n_tensors = 255, total_size = 6.4G
Writing: 100%|██████████| 6.43G/6.43G [01:19<00:00, 80.9Mbyte/s]
INFO:hf-to-gguf:Model successfully exported to pct-classfier-fine-tuned/unsloth.F16.gguf
Unsloth: Conversion completed! Output location: ./pct-classfier-fine-tuned/unsloth.F16.gguf

KeyError                                  Traceback (most recent call last)
in <cell line: 8>()
      6
      7 # Save to 16bit GGUF
----> 8 if True: model.save_pretrained_gguf("pct-classfier-fine-tuned", tokenizer, quantization_method = "f16")
      9 if False: model.push_to_hub_gguf("hf/model", tokenizer, quantization_method = "f16", token = "")
     10

1 frames
/usr/local/lib/python3.10/dist-packages/unsloth/save.py in unsloth_save_pretrained_gguf(self, save_directory, tokenizer, quantization_method, first_conversion, push_to_hub, token, private, is_main_process, state_dict, save_function, max_shard_size, safe_serialization, variant, save_peft_format, tags, temporary_location, maximum_memory_usage)
   1642
   1643     # Save Ollama modelfile
-> 1644     modelfile = create_ollama_modelfile(tokenizer, all_file_locations[0])
   1645     modelfile_location = None
   1646     if modelfile is not None:

/usr/local/lib/python3.10/dist-packages/unsloth/save.py in create_ollama_modelfile(tokenizer, gguf_location)
   1492         )
   1493     else:
-> 1494         modelfile = modelfile.format(
   1495             __FILE_LOCATION__ = gguf_location,
   1496         )

KeyError: '"name"'
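
Worth noting: the log above shows the F16 GGUF itself was exported successfully; the traceback points at the step after conversion, where create_ollama_modelfile calls modelfile.format(__FILE_LOCATION__ = gguf_location). A likely trigger is that Python's str.format treats the literal {"name": ...} braces copied from the Llama 3.2 chat template as replacement fields. A minimal sketch that reproduces the same KeyError (the template string here is illustrative, not Unsloth's actual Modelfile template):

# Hypothetical Modelfile template: one real placeholder plus literal JSON
# braces taken from the chat template's tool-calling instructions.
modelfile = (
    "FROM {__FILE_LOCATION__}\n"
    'TEMPLATE """Respond in the format {"name": function name, '
    '"parameters": dictionary of argument name and its value}."""'
)

try:
    modelfile.format(__FILE_LOCATION__ = "./pct-classfier-fine-tuned/unsloth.F16.gguf")
except KeyError as e:
    print(e)  # prints '"name"' -- str.format reads {"name": ...} as a field named "name"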

danielhanchen commented 1 month ago

@ramzyizza Apologies for the delay - you're correct, there is an issue - I will fix this ASAP! Sorry about that!
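
For anyone patching save.py locally while waiting for the release, one brace-safe approach (an assumption about how it could be handled locally, not necessarily what the upstream fix does) is to substitute the placeholder with str.replace instead of str.format, so any other {...} in the template is left untouched:

# Sketch only: modelfile_template stands in for the string that
# create_ollama_modelfile builds from the tokenizer's chat template.
def fill_file_location(modelfile_template: str, gguf_location: str) -> str:
    # str.replace touches only the exact placeholder, whereas str.format
    # tries to resolve every {...} it encounters (e.g. {"name": ...}).
    return modelfile_template.replace("{__FILE_LOCATION__}", gguf_location)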

Tarun-032 commented 1 month ago

Facing the same error - hope it is fixed soon.

vietanhdev commented 1 month ago

Facing the same issue here.

vietanhdev commented 1 month ago

In the meantime, you can convert with this tool: https://huggingface.co/spaces/ggml-org/gguf-my-repo.
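
If you would rather stay inside the notebook, another interim route (a sketch using the standard Unsloth merged-save API; the repo id and token are placeholders) is to push only the merged 16-bit weights, which skips the GGUF/Modelfile step, and then point gguf-my-repo at the uploaded model:

# model and tokenizer come from the fine-tuning session above.
# Upload merged 16-bit weights only (no GGUF conversion, no Ollama Modelfile),
# then run https://huggingface.co/spaces/ggml-org/gguf-my-repo on the repo.
model.push_to_hub_merged(
    "your-username/pct-classfier-fine-tuned",  # placeholder repo id
    tokenizer,
    save_method = "merged_16bit",
    token = "hf_...",                          # your Hugging Face write token
)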

danielhanchen commented 1 month ago

@vietanhdev @Tarun-032 @ramzyizza Sincere apologies for the delay - this is now fixed! If you're on Colab or Kaggle, delete the runtime, disconnect, and then restart. If you installed Unsloth on a local machine, please update it via:

pip uninstall unsloth -y
pip install --upgrade --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
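
After reinstalling, restart the runtime; then a quick check (a minimal sketch, assuming a standard pip environment) confirms the new build is the one being imported:

import importlib.metadata

# Should report the version just installed from the git source above.
print(importlib.metadata.version("unsloth"))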

Sorry for the delay!