mistralai / mistral-finetune

Apache License 2.0

How can I merge the LoRA weights into the base model? #74

pantDevesh opened this issue 3 weeks ago

pantDevesh commented 3 weeks ago

Is there a script for merging LoRA weights back into the base model?

mkserge commented 3 weeks ago

You can do something like this:

import safetensors.torch
from mistral_inference.model import Transformer

# Load the base model, apply the LoRA weights on top of it,
# then save the merged weights to a single safetensors file.
model = Transformer.from_folder(args.model_path, device="cuda:0")
model.load_lora("/path/to/lora.safetensors", device="cuda:0")
safetensors.torch.save_model(model, "/path/to/merged.safetensors")
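For context, "merging" just folds the low-rank adapter update into the base weights: W_merged = W + (alpha / r) * (B @ A). Here is a toy, pure-Python sketch of that arithmetic; the function names and matrix sizes are illustrative only, not part of the mistral_inference API:

```python
# Toy sketch of a LoRA merge: fold the rank-r update B @ A
# (scaled by alpha / r) into the base weight matrix W.
# All names and shapes here are illustrative, not library code.

def matmul(X, Y):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def merge_lora(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A), the merged weight matrix."""
    scale = alpha / r
    BA = matmul(B, A)  # d x k update, same shape as W
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# Toy 2x2 base weight with a rank-1 adapter (A is 1x2, B is 2x1).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]
B = [[1.0], [1.0]]
print(merge_lora(W, A, B, alpha=2.0, r=1))  # [[3.0, 4.0], [2.0, 5.0]]
```

After this fold, the merged file behaves like an ordinary dense checkpoint, so inference needs no adapter-aware loading.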
forest520 commented 3 weeks ago

If save_adapters = True, how can I perform inference with the LoRA model from Python code?