OpenGVLab / LLaMA-Adapter

[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
GNU General Public License v3.0

Finetuning Of V2? #55

Open poonehmousavi opened 1 year ago

poonehmousavi commented 1 year ago

Hi, thanks for the amazing code. I wonder when you plan to release the code for fine-tuning V2? Also, do you plan to add Falcon fine-tuning? Thanks

gaopengpjlab commented 1 year ago

Single-turn fine-tuning code is here: https://github.com/ZrrSkywalker/LLaMA-Adapter/tree/main/alpaca_finetuning_v1

Multi-turn fine-tuning code is here: https://github.com/ZrrSkywalker/LLaMA-Adapter/tree/main/llama_adapter_v2_chat65b

Multimodal pretraining/fine-tuning/inference code is here: https://github.com/ZrrSkywalker/LLaMA-Adapter/tree/main/imagebind_LLM

Falcon fine-tuning code is here: https://github.com/Lightning-AI/lit-parrot/
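For context on what these fine-tuning recipes train: LLaMA-Adapter's parameter efficiency comes from zero-initialized attention, where a small set of learnable prompt tokens is attended to through a gate that starts at zero, so fine-tuning begins from the frozen model's exact behavior. Below is a minimal single-head NumPy sketch of that idea; the shapes, function names, and the separate-softmax formulation are simplifications for illustration, not the repo's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_prompt_attention(q, k, v, pk, pv, gate):
    """Single-head attention with adapter prompt tokens (pk, pv).
    The prompt contribution is scaled by a learnable gate that is
    zero-initialized, so at the start of fine-tuning the output
    equals vanilla attention over the original context."""
    d = q.shape[-1]
    s = q @ k.T / np.sqrt(d)        # scores vs. original context tokens
    sp = q @ pk.T / np.sqrt(d)      # scores vs. adapter prompt tokens
    a = softmax(s)                  # vanilla attention weights
    ap = gate * softmax(sp)         # gated prompt attention weights
    return a @ v + ap @ pv

rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
pk, pv = rng.normal(size=(2, 8)), rng.normal(size=(2, 8))

vanilla = softmax(q @ k.T / np.sqrt(8)) @ v
out0 = gated_prompt_attention(q, k, v, pk, pv, gate=0.0)  # zero-init gate
assert np.allclose(out0, vanilla)  # no perturbation at initialization
```

During fine-tuning only the prompt tokens and the gate are updated, which is why the trainable footprint stays around 1.2M parameters while the LLaMA backbone remains frozen.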

poonehmousavi commented 1 year ago

Thanks for your prompt response. I want to fine-tune the LLaMA 13B model, but I have checked the multi-turn fine-tuning code and could not find a fine-tuning recipe. Single-turn fine-tuning is only offered for the 7B model. Do you have any instructions on how to fine-tune the 13B or a bigger model?

yxchng commented 1 year ago

@gaopengpjlab any updates on fine-tuning larger models?