OpenGVLab / LLaMA-Adapter

[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
GNU General Public License v3.0
5.63k stars 367 forks

Support for llama-2 70B #97

Open qizzzh opened 11 months ago

qizzzh commented 11 months ago

Would it work natively, or do we need to train new adapters?

gaopengpjlab commented 11 months ago

Our new repo supports full fine-tuning / PEFT / quantized PEFT of LLaMA2-70B. Please check the following repo:

https://github.com/Alpha-VLLM/LLaMA2-Accessory
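For readers unfamiliar with the PEFT option mentioned above, the core idea (low-rank adaptation, as in LoRA) can be sketched in a few lines. This is an illustrative toy, not the LLaMA2-Accessory API; all names and sizes here are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 4096, 8                           # hidden size, LoRA rank (illustrative)
W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((d, r)) * 0.01   # trainable down-projection
B = np.zeros((r, d))                     # trainable up-projection, init to zero

def forward(x):
    # Frozen path plus low-rank update: y = xW + x(AB).
    # Only A and B are trained; W stays frozen.
    return x @ W + x @ A @ B

x = rng.standard_normal((1, d))
y = forward(x)

frozen = W.size
trainable = A.size + B.size
print(trainable / frozen)  # ~0.4% of the full matrix's parameters
```

Because B starts at zero, the adapted model initially matches the frozen one exactly; training only the small A and B matrices is what keeps the tunable parameter count tiny relative to the 70B base model.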

qizzzh commented 11 months ago

Nice. Do you have a fine-tuned LLaMA-2 70B multi-modal model?

gaopengpjlab commented 11 months ago

Please check the model zoo of LLaMA2-Accessory. We released a LLaMA2-70B model fine-tuned on ShareGPT.

qizzzh commented 11 months ago

Sorry, where is it? I didn't find it in the repo.

gaopengpjlab commented 11 months ago

Please check this website:

https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/docs/finetune.md#multi-turn-instruction-tuning-of-llama2-7b-on-sharegpt

LutaoChu commented 11 months ago

Great work! Has the LLaMA-Adapter V2 model based on LLaMA2 been released? If not, is there a release plan?

gaopengpjlab commented 11 months ago

@LutaoChu Please check the new repo. https://llama2-accessory.readthedocs.io/en/latest/finetune/sg_peft.html