tloen / alpaca-lora

Instruct-tune LLaMA on consumer hardware
Apache License 2.0

Are the saved models (either adapter_model.bin or pytorch_model.bin) only 25-26MB in size? #601

Open LAB-703 opened 11 months ago

LAB-703 commented 11 months ago

Is this expected behavior?
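For what it's worth, a file that small is plausible: LoRA saves only the low-rank update matrices, not the 7B base model. A rough back-of-the-envelope sketch, assuming LLaMA-7B-like dimensions (32 layers, 4096 hidden size) and this repo's default LoRA config (r=8, targeting `q_proj` and `v_proj`) — the exact size depends on rank, target modules, and dtype:

```python
def lora_param_count(num_layers, module_shapes, r):
    # each adapted linear layer stores two low-rank factors:
    # A with shape (r, d_in) and B with shape (d_out, r)
    return num_layers * sum(r * (d_in + d_out) for d_in, d_out in module_shapes)

# assumption: LLaMA-7B-like dims; q_proj and v_proj are both 4096 -> 4096
params = lora_param_count(num_layers=32,
                          module_shapes=[(4096, 4096), (4096, 4096)],
                          r=8)
size_mb = params * 4 / 2**20  # fp32, 4 bytes per parameter
print(params, round(size_mb, 1))  # 4194304 parameters, 16.0 MB
```

A higher rank, more target modules, or optimizer metadata in the checkpoint pushes the file toward the 25-26 MB you are seeing; the point is that it is a tiny fraction of the base model's ~13 GB.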

LAB-703 commented 11 months ago

Also, does anybody know how to convert the saved models to safetensors format for uploading to Hugging Face?

minju0307 commented 11 months ago

I have the same questions, and I also want to know how to convert a safetensors file into the Hugging Face format (e.g. loadable with from_pretrained).

minju0307 commented 11 months ago

I solved it by installing transformers==4.33.3. The newest transformers version saves models as safetensors by default, but I think that is not stable for LoRA. With transformers==4.33.3, it saves a *.bin model.
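If pinning works for you, the fix amounts to a single install (version number taken from the comment above):

```shell
pip install "transformers==4.33.3"
```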

Twelve-or-12 commented 8 months ago

> I solved it by installing transformers==4.33.3. The newest transformers version saves models as safetensors by default, but I think that is not stable for LoRA. With transformers==4.33.3, it saves a *.bin model.

Can you explain more about your setup? I installed transformers==4.33.3 and fine-tuned CodeLlama-7b on a custom dataset, but it still outputs safetensors format. By the way, do you know how to run inference with the fine-tuned model? I pointed --lora_weights at the local ./lora-alpaca directory, and it raises an error: safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
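That `InvalidHeaderDeserialization` error typically means the file being parsed as safetensors does not start with a valid header (for example, a renamed or truncated `.bin` checkpoint). A quick stdlib-only sanity check, based on the safetensors layout (an 8-byte little-endian header length followed by a JSON header):

```python
import json
import struct

def read_safetensors_header(path: str):
    # safetensors layout: u64 little-endian header length, then a JSON blob
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len).decode("utf-8"))

# usage: read_safetensors_header("adapter_model.safetensors")
# a pickled .bin file will fail here, since its first bytes are not a
# plausible length followed by JSON
```

If this raises on your adapter file, the file is not actually in safetensors format, whatever its extension says.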

XiaoXiaoYi123 commented 7 months ago

trainer.save_pretrained(output_dir) worked for me.