huggingface / alignment-handbook

Robust recipes to align language models with human and AI preferences
https://huggingface.co/HuggingFaceH4
Apache License 2.0

FSDP + QDoRA Support #159

Open iseesaw opened 5 months ago

iseesaw commented 5 months ago

Hi team, great work!

QDoRA seems to perform better than QLoRA; see Efficient finetuning of Llama 3 with FSDP QDoRA.

I wonder whether there will be a demo / example of finetuning with FSDP + QDoRA?

Thanks!
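For context, QDoRA here means DoRA adapters on top of a 4-bit quantized base model. A minimal sketch of enabling DoRA in PEFT (this assumes peft >= 0.9.0, where `LoraConfig` gained the `use_dora` flag; applying it to quantized bitsandbytes layers needs a newer peft release on top of that, and the model id and hyperparameters below are illustrative only):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")  # illustrative

# DoRA is switched on via use_dora=True on an otherwise ordinary LoRA config.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    use_dora=True,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()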

MustafaAlahmid commented 5 months ago

I have done some full-parameter FSDP training of Mistral 7B; maybe it's useful for you: here

iseesaw commented 5 months ago

> I have done some full-parameter FSDP training of Mistral 7B; maybe it's useful for you: here

Thanks, good job!

I want to finetune Llama-3-70B on 8× A6000 48 GB GPUs, which are not enough for full-parameter training.

FSDP + QDoRA is the method I have found to be feasible and probably the most effective.

MustafaAlahmid commented 5 months ago

> > I have done some full-parameter FSDP training of Mistral 7B; maybe it's useful for you: here
>
> Thanks, good job!
>
> I want to finetune Llama-3-70B on 8× A6000 48 GB GPUs, which are not enough for full-parameter training.
>
> FSDP + QDoRA is the method I have found to be feasible and probably the most effective.

Yes, it should work. Try changing the FSDP config file and setting the Llama decoder layer as the transformer layer class to wrap. The command should be something like this:

```shell
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/fsdp.yaml scripts/run_sft.py recipes/{modelname}/sft/config_qlora.yaml
```
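For reference, a sketch of what the relevant part of `recipes/accelerate_configs/fsdp.yaml` could look like with the Llama decoder layer set as the class to wrap. The key names follow accelerate's standard FSDP config; exact names and defaults can vary across accelerate versions, and the values below are illustrative:

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
mixed_precision: bf16
num_machines: 1
num_processes: 8  # e.g. the 8x A6000 setup mentioned above
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  # Wrap each LlamaDecoderLayer as its own FSDP unit:
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
  fsdp_backward_prefetch: BACKWARD_PRE
  fsdp_cpu_ram_efficient_loading: true
  fsdp_offload_params: false
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_state_dict_type: SHARDED_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_use_orig_params: false
```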

iseesaw commented 5 months ago

> Yes, it should work. Try changing the FSDP config file and setting the Llama decoder layer as the transformer layer class to wrap. The command should be something like this:
>
> `ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/fsdp.yaml scripts/run_sft.py recipes/{modelname}/sft/config_qlora.yaml`

I've tried this command and encountered the issue described in https://github.com/huggingface/peft/issues/1674

Currently, I am following the official example provided in PEFT for further troubleshooting: https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_qlora_fsdp.sh
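For anyone following along: as far as I can tell, the key detail in that PEFT example is setting the 4-bit quant storage dtype to match the model's compute dtype, so FSDP can flatten and shard the quantized weights like ordinary parameters. A minimal loading sketch (assumes transformers >= 4.39, which added `bnb_4bit_quant_storage`; the model id is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# For FSDP + QLoRA/QDoRA, bnb_4bit_quant_storage should match the dtype used
# elsewhere in the model (bf16 here); otherwise FSDP cannot shard the 4-bit
# weights together with the unquantized parameters.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B",  # illustrative model id
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,     # match bnb_4bit_quant_storage
)
```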

freegheist commented 5 months ago

FSDP + QDoRA for Zephyr 141B would be really good.

deep-diver commented 4 months ago

AFAIK, FSDP + QDoRA is not a supported feature in official HF releases like transformers, peft, ...