unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

Unsloth: Merging 4bit and LoRA weights to 4bit... #1070

Closed. finnbusse closed this issue 1 month ago.

finnbusse commented 1 month ago

Unsloth: Merging 4bit and LoRA weights to 4bit... This might take 5 minutes...

/usr/local/lib/python3.10/dist-packages/peft/tuners/lora/bnb.py:336: UserWarning: Merge lora module to 4-bit linear may get different generations due to rounding errors. warnings.warn(

Done. Unsloth: Saving tokenizer... Done. Unsloth: Saving model... This might take 10 minutes for Llama-7b... Done. Unsloth: Merging 4bit and LoRA weights to 4bit... This might take 5 minutes... Done. Unsloth: Saving 4bit Bitsandbytes model. Please wait...


IsADirectoryError                         Traceback (most recent call last)

in <cell line: 7>()
      5 # Merge to 4bit
      6 if True: model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit_forced", safe_serialization = None)
----> 7 if True: model.push_to_hub_merged("grabbe-gymnasium-detmold/grabbe-ai-mistral-7b-v0.3", tokenizer, save_method = "merged_4bit_forced", token = "...", safe_serialization = None)
      8
      9 # Just LoRA adapters

7 frames

/usr/lib/python3.10/pathlib.py in open(self, mode, buffering, encoding, errors, newline)
   1117         if "b" not in mode:
   1118             encoding = io.text_encoding(encoding)
-> 1119         return self._accessor.open(self, mode, buffering, encoding, errors,
   1120                                    newline)
   1121

IsADirectoryError: [Errno 21] Is a directory: 'grabbe-gymnasium-detmold/grabbe-ai-mistral-7b-v0.3'
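For context, here is a minimal, self-contained reproduction of the underlying failure, independent of Unsloth (the path names below are placeholders): Python raises IsADirectoryError (errno 21) whenever a file is opened for writing at a path that is already a directory. That is what happened here, because the earlier local save had created a directory whose path matched the Hub repo id.

```python
import os
import tempfile

# Reproduce the failure mode outside of Unsloth: opening a path for
# writing when that path is an existing directory raises
# IsADirectoryError ([Errno 21] on Linux). The earlier merged save had
# created a local directory matching the Hub repo id, so the subsequent
# push tried to write a file where a directory already stood.
with tempfile.TemporaryDirectory() as tmp:
    repo_like_path = os.path.join(tmp, "some-org", "some-repo")
    os.makedirs(repo_like_path)          # stands in for the earlier save
    try:
        with open(repo_like_path, "w"):  # write a file at a directory path
            pass
    except IsADirectoryError as err:
        print(f"[Errno {err.errno}] {err.strerror}")
```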

danielhanchen commented 1 month ago

Apologies for the delay - it's best to save it to another directory; you can use a totally new one!

finnbusse commented 1 month ago

Ok, that works. But why can't I push it directly into my project's repo?

danielhanchen commented 1 month ago

@finnbusse I think we check whether the directory already exists, since saving into an existing one might randomly corrupt or overwrite files.
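The existence check described above can be sketched roughly like this. This is a hypothetical helper for illustration only, not Unsloth's actual code; the function name `safe_save_dir` is an assumption:

```python
import os

def safe_save_dir(path: str) -> str:
    """Refuse to save into a path that already exists, so a merged save
    can never silently overwrite or corrupt files in that directory."""
    if os.path.exists(path):
        raise FileExistsError(
            f"'{path}' already exists - pick a brand-new directory "
            f"to avoid overwriting files."
        )
    os.makedirs(path)
    return path
```

With a guard like this, the first save into a fresh directory succeeds, and a second save to the same path fails loudly instead of clobbering the earlier output.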

finnbusse commented 1 month ago

@danielhanchen Okay! Thank you so much!