h-zhao1997 / cobra

Cobra: Extending Mamba to Multi-modal Large Language Model for Efficient Inference
MIT License

Ask for finetune/LORA code #14

Open GuodongQi opened 6 months ago

GuodongQi commented 6 months ago

Thanks for the nice work! Do the authors plan to release code for fine-tuning or LoRA?

h-zhao1997 commented 5 months ago

For fine-tuning from a pretrained checkpoint, all you need to do is pass --pretrained_checkpoint to your training script.

We currently have no plans to incorporate LoRA, but it should be relatively straightforward to add LoRA training to the code using Hugging Face's PEFT library.
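
For reference, a minimal sketch of what that could look like with PEFT. This is not part of the Cobra codebase: `model` stands for the loaded Cobra VLM, and the `target_modules` names are assumptions based on the linear projections in the mamba_ssm block layout; adjust them to the actual module names in your checkpoint.

```python
# Minimal LoRA sketch using Hugging Face's PEFT library (not part of the Cobra repo).
# Assumption: the Mamba backbone exposes linear projections named
# in_proj / x_proj / dt_proj / out_proj (as in mamba_ssm); adjust as needed.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,                       # LoRA rank
    lora_alpha=16,             # scaling applied to the LoRA updates
    lora_dropout=0.05,
    bias="none",
    target_modules=["in_proj", "x_proj", "dt_proj", "out_proj"],
)

# Wrap the model so that only the injected LoRA adapters require gradients.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

After wrapping, the existing training loop can be reused as-is, since only the adapter parameters remain trainable.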