Alpha-VLLM / LLaMA2-Accessory

An Open-source Toolkit for LLM Development
https://llama2-accessory.readthedocs.io/

Config of Two-Stage Training of Multi-Modal LLaMA2 #61

Closed · CaraJ7 closed this 1 year ago

CaraJ7 commented 1 year ago

Hi. I would like to run stage-2 training based on your released stage-1 model. I am following the instructions from https://llama2-accessory.readthedocs.io/en/latest/finetune/mm.html#stage2.

However, I could not find the corresponding config for the stage-2 model, specifically the llama_config referenced in accessory/exps/finetune/mm/alpacaLlava_llamaQformerv2_13B.sh. Could you kindly provide it? Thank you.

Enderfga commented 1 year ago

https://huggingface.co/Alpha-VLLM/LLaMA2-Accessory/blob/main/config/13B_params.json — you can also fetch it by running "python tools/download.py --down_config --model_size 13B" directly in our repository. The config used here is the params.json of the original LLaMA2 13B. If your stage-1 model was trained with PEFT, you additionally need to pass configs/model/finetune/sg/llamaPeft_normBiasLora.json along with it.
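
For concreteness, a minimal sketch of the two steps described above (the download destination and the llama_config variable layout inside the .sh script are assumptions here; adjust the paths to your own checkout):

    # Download the original LLaMA2-13B params.json (run from the repo root)
    python tools/download.py --down_config --model_size 13B

    # In accessory/exps/finetune/mm/alpacaLlava_llamaQformerv2_13B.sh, point
    # llama_config at the downloaded file. If stage 1 was trained with PEFT,
    # append the PEFT config as well, so both files reach --llama_config.
    # "path/to/13B_params.json" below is a placeholder, not a real repo path.
    llama_config="path/to/13B_params.json configs/model/finetune/sg/llamaPeft_normBiasLora.json"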

CaraJ7 commented 1 year ago

Thanks for your answer!