h-zhao1997 / cobra

Cobra: Extending Mamba to Multi-modal Large Language Model for Efficient Inference
MIT License

Ask for finetune/LoRA code #14

Open · GuodongQi opened 1 month ago

GuodongQi commented 1 month ago

Thanks for the nice work! Do the authors plan to release code for fine-tuning or LoRA?

h-zhao1997 commented 1 month ago

For fine-tuning from pretrained weights, all you need to do is add `--pretrained_checkpoint` to your training script.

We currently have no plans to incorporate LoRA, but it should be relatively straightforward to add LoRA training to the code using Hugging Face's PEFT library.
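
For anyone who wants to try that route, here is a minimal sketch (not part of the Cobra codebase) of wrapping a language backbone with PEFT's LoRA adapters. The module names in `target_modules`, the `add_lora` helper, and the `vlm.llm_backbone` attribute in the usage comment are assumptions for illustration; inspect the actual model with `named_modules()` and adapt the names and hyperparameters to the code in this repo.

```python
# Minimal LoRA sketch using Hugging Face PEFT (illustrative, not the authors' implementation).
from peft import LoraConfig, get_peft_model

def add_lora(model):
    lora_config = LoraConfig(
        r=16,             # rank of the low-rank update matrices
        lora_alpha=32,    # scaling factor applied to the LoRA update
        lora_dropout=0.05,
        bias="none",
        # Hypothetical Mamba projection names; check model.named_modules() for the real ones.
        target_modules=["in_proj", "x_proj", "out_proj"],
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # sanity check: only LoRA params should be trainable
    return model

# Usage (schematic): wrap the backbone before building the optimizer, then train as usual;
# only the injected low-rank adapters receive gradients.
# vlm.llm_backbone = add_lora(vlm.llm_backbone)
```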