Our LLaMA finetune showcase consists of two parts:
1) Convert a Hugging Face checkpoint to a Megatron-DeepSpeed compatible checkpoint.
2) Finetune with a supervised dataset.
The latest version of the checkpoint conversion generates not only the model weights but also the optimizer state and the LR scheduler state, which occupies a huge amount of disk storage. This patch removes those states from the converted checkpoint and regenerates them when the finetune program starts (see the sketch after the size comparison below). Take LLaMA 7B as an example:
# du -sh llama-7b-mega-ds-T2P2.*
13G llama-7b-mega-ds-T2P2.with-patch
38G llama-7b-mega-ds-T2P2.without-patch
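The idea can be sketched with the public DeepSpeed checkpoint API. This is only an illustration of the mechanism, not the patch itself; the model object, DeepSpeed config, and checkpoint path below are placeholders:

import deepspeed

def build_engine(model, ds_config, ckpt_dir):
    # deepspeed.initialize builds a fresh optimizer and LR scheduler from
    # ds_config, so neither needs to be stored in the converted checkpoint.
    engine, _, _, _ = deepspeed.initialize(
        model=model,
        model_parameters=[p for p in model.parameters() if p.requires_grad],
        config=ds_config,
    )
    # Restore module weights only; optimizer and LR-scheduler states are
    # skipped at load time and therefore do not have to exist on disk.
    engine.load_checkpoint(
        ckpt_dir,
        load_optimizer_states=False,
        load_lr_scheduler_states=False,
        load_module_only=True,
    )
    return engine

In Megatron-style launchers the same effect is typically achieved with flags such as --no-load-optim / --finetune, which skip restoring optimizer state from the checkpoint.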
Run the finetune task with the Alpaca dataset and observe that the loss converges well: