microsoft / Megatron-DeepSpeed

Ongoing research training transformer language models at scale, including: BERT & GPT-2

[Finetune] enable converting checkpoints without optimizer state generation #424

Closed billishyahao closed 2 days ago

billishyahao commented 3 months ago

Our LLaMa finetune showcase consists of two parts: 1) converting a Hugging Face checkpoint into a Megatron-DeepSpeed compatible checkpoint, and 2) finetuning on a supervised dataset.
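For context, here is a minimal sketch of what step 1 boils down to. It assumes a single-rank layout for brevity and elides the parameter-name remapping and tensor/pipeline-parallel partitioning that the real converter performs; `hf_path`, `out_path`, and the `"module"` key layout are illustrative assumptions, not the converter's exact code:

```python
import torch
from transformers import AutoModelForCausalLM

def convert_weights_only(hf_path, out_path):
    # Load the Hugging Face model and grab its raw weights.
    hf_model = AutoModelForCausalLM.from_pretrained(hf_path, torch_dtype=torch.float16)
    sd = hf_model.state_dict()
    # ... remap HF parameter names to the Megatron-DeepSpeed naming scheme here ...
    # With this patch, only the model weights are written out; no optimizer or
    # LR-scheduler state is materialized at conversion time.
    torch.save({"module": sd}, out_path)
```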

The latest version of the checkpoint conversion generates the model weights together with the optimizer and LR-scheduler states, which takes up a huge amount of disk storage. This patch skips generating those states during conversion and regenerates them when the finetune program starts. Take LLaMa 7B as an example:

# du -sh llama-7b-mega-ds-T2P2.*
13G     llama-7b-mega-ds-T2P2.with-patch
38G     llama-7b-mega-ds-T2P2.without-patch
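On the finetune side, the idea can be sketched as follows, assuming the standard DeepSpeed engine API rather than the exact Megatron-DeepSpeed code path: the converted checkpoint carries only module weights, so the engine loads the module only and builds fresh optimizer / LR-scheduler states from the DeepSpeed config at startup (`ds_config` and `ckpt_dir` are placeholders):

```python
import deepspeed

def start_finetune(model, ds_config, ckpt_dir):
    # Build the engine; optimizer and LR scheduler come from the DeepSpeed config.
    engine, optimizer, _, lr_scheduler = deepspeed.initialize(
        model=model,
        model_parameters=model.parameters(),
        config=ds_config,
    )
    # Load only the module weights from the converted checkpoint; optimizer and
    # LR-scheduler states are (re)created in memory instead of being read from
    # disk, which is why the weights-only 13G checkpoint is sufficient.
    engine.load_checkpoint(
        ckpt_dir,
        load_optimizer_states=False,
        load_lr_scheduler_states=False,
        load_module_only=True,
    )
    return engine, optimizer, lr_scheduler
```

Since finetuning starts the optimizer from scratch anyway, rebuilding these states at startup should not change convergence behaviour, which the loss curve below reflects.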

Running the finetune task on the Alpaca dataset shows good loss convergence:

[image: training loss curve]