unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

Fix/export mistral #1281

Closed. Erland366 closed this 1 week ago.

Erland366 commented 1 week ago

This fixes the issue by setting an environment variable at the very start of the unsloth import.

I tried setting it inside save_gguf, but that doesn't work .-.


Where I found the solution: https://github.com/protocolbuffers/protobuf/issues/3002#issuecomment-325459597
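
For reference, a minimal sketch of the workaround described in that thread. The exact variable name (PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION) is my assumption based on the linked protobuf issue; the key point is that it has to be set before the import happens:

```python
import os

# Assumption: this is the variable from the linked protobuf issue.
# protobuf reads it once at import time, so it must be set before
# unsloth (and therefore protobuf) is imported; setting it later,
# e.g. inside save_gguf, has no effect.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

from unsloth import FastLanguageModel
```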

I will evaluate on other models first, then open the PR.

We need a separate PR for testing on Kaggle, since even saving_to_gguf doesn't work on Kaggle because of limited disk space (we haven't moved that function to /tmp yet).

Erland366 commented 1 week ago

Confirmed Mistral working (GGUF conversion takes ages!)

Erland366 commented 1 week ago

Here's an example Colab to try: https://colab.research.google.com/drive/1Ac3rwXoNYGeS8xnBri4k6oapyeuH7ui0?usp=sharing


unsloth/Llama-3.2-1B-Instruct-bnb-4bit confirmed working
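
For anyone reproducing this outside the Colab, a minimal usage sketch of the export path being tested, assuming unsloth's standard save_pretrained_gguf API (the quantization method and output directory are arbitrary example choices):

```python
from unsloth import FastLanguageModel

# Load the 4-bit model that was confirmed working above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Export to GGUF; this drives llama.cpp under the hood, which is why
# the conversion can take a long time. "q4_k_m" is just one example.
model.save_pretrained_gguf(
    "llama-3.2-1b-gguf", tokenizer, quantization_method="q4_k_m"
)
```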

danielhanchen commented 1 week ago

@Erland366 I think you added a function to move it to /tmp on Kaggle - is there a way to force it to use /tmp for all Kaggle machines?
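
A hypothetical sketch of what forcing /tmp could look like (the KAGGLE_KERNEL_RUN_TYPE check and the helper name are my assumptions for illustration, not the PR's actual code):

```python
import os

def resolve_save_directory(requested_dir: str) -> str:
    # Assumption: Kaggle kernels set KAGGLE_KERNEL_RUN_TYPE in the
    # environment. /kaggle/working has a small disk quota, so redirect
    # large GGUF outputs to /tmp on every Kaggle machine.
    if "KAGGLE_KERNEL_RUN_TYPE" in os.environ:
        return os.path.join("/tmp", os.path.basename(requested_dir.rstrip("/")))
    return requested_dir
```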