Open iseesaw opened 5 months ago
I have done some FSDP training of Mistral 7B with full parameters; maybe it's useful for you, see here.
Thanks, good job!
I want to finetune Llama-3-70B on 8x A6000 48GB, which is not enough memory for full-parameter training.
FSDP + QDoRA is the approach I have found to be feasible and probably the most effective.
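For rough intuition on why full-parameter training does not fit (assuming bf16 weights/gradients and AdamW with fp32 master weights and moments, i.e. roughly 16 bytes per parameter before activations):

- weights: 70B x 2 bytes ≈ 140 GB
- gradients: 70B x 2 bytes ≈ 140 GB
- optimizer states: 70B x 12 bytes ≈ 840 GB

That is on the order of 1.1 TB, versus 8 x 48 GB = 384 GB of total GPU memory, which is why a quantized PEFT approach such as QDoRA is attractive here.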
Yes, it should work. Try changing the FSDP config file to wrap the Llama decoder layer; the launch command should be something like this:
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/fsdp.yaml scripts/run_sft.py recipes/{modelname}/sft/config_qlora.yaml
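For reference, here is a minimal sketch of what recipes/accelerate_configs/fsdp.yaml could look like with the Llama decoder layer set as the wrapping class. This is an assumption based on the standard accelerate FSDP config keys, not the exact file from the repo; key names and defaults differ slightly between accelerate versions, and num_processes: 8 just matches the 8-GPU setup above.

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
mixed_precision: bf16
num_machines: 1
num_processes: 8  # one process per A6000
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer  # wrap each Llama decoder block
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_state_dict_type: SHARDED_STATE_DICT
  fsdp_offload_params: false
  fsdp_cpu_ram_efficient_loading: true
  fsdp_use_orig_params: false
```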
I've tried this command and encountered the issue described in https://github.com/huggingface/peft/issues/1674
Currently, I am following the official example provided in PEFT for further troubleshooting: https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_qlora_fsdp.sh
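For anyone following along, below is a minimal Python sketch of the QDoRA part of that setup: it mirrors the quantization + PEFT config used in the PEFT FSDP-QLoRA example, with use_dora=True switched on. The model id, rank, and target_modules are illustrative assumptions, and whether DoRA actually trains cleanly under FSDP may still depend on the peft issue linked above.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization; bnb_4bit_quant_storage must match the bf16 dtype so
# that FSDP can shard the quantized weights (as in the PEFT FSDP+QLoRA example).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B",  # assumed model id
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)

# DoRA is enabled on top of a regular LoRA config via use_dora=True.
peft_config = LoraConfig(
    r=16,  # illustrative rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    use_dora=True,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
# The wrapped model is then handed to the trainer and launched with
# `accelerate launch --config_file .../fsdp.yaml ...` as in the command above.
```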
FSDP + QDoRA for Zephyr 141B would be really good.
AFAIK, FSDP + QDoRA is not a supported feature in official HF releases such as transformers, peft, etc.
Hi the team, great work!
QDoRA seems to be better than QLoRA; see "Efficient finetuning of Llama 3 with FSDP QDoRA".
I wonder whether there will be a demo / example of FSDP + QDoRA during finetuning?
Thanks!