sail-sg / sdft

[ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning".
https://aclanthology.org/2024.acl-long.58

Inconsistency in Precision Between Dataset Generation and Training for gsm8k in SDFT Script #13


dineshkh commented 1 month ago

In the gsm8k script (link), the distilled dataset is generated using fp16 precision, while the model is trained on this dataset using bf16.

Shouldn't the precision format be consistent throughout the process?

- Generate the distilled dataset using fp16: [Line 35](https://github.com/sail-sg/sdft/blob/bfb6c255fccdce7459235c20f19a3b9817a9cd5d/scripts/gsm8k/sdft.sh#L35)
- Train on the distilled dataset using bf16: [Line 61](https://github.com/sail-sg/sdft/blob/bfb6c255fccdce7459235c20f19a3b9817a9cd5d/scripts/gsm8k/sdft.sh#L61)

rickyang1114 commented 1 month ago

Thanks for your interest! In an initial experiment, training with fp16 resulted in instabilities. Consequently, we adopted bf16 for training while continuing to use fp16 for inference. This approach has not led to any significant issues to date.
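
For readers following along, here is a minimal sketch of the arrangement described above, assuming a Hugging Face transformers setup: fp16 for generating the distilled dataset, bf16 for training on it. The model name, output path, and argument values are placeholders, not the repository's actual configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder; substitute the actual base model

# Inference pass that generates the distilled dataset: weights loaded in fp16.
gen_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Training on the distilled dataset: bf16 mixed precision instead of fp16.
training_args = TrainingArguments(
    output_dir="outputs/sdft-gsm8k",  # placeholder path
    bf16=True,   # bf16 mixed-precision training
    fp16=False,  # fp16 training was unstable in the authors' initial experiment
)
```

A common reason for this kind of split is that bf16 keeps fp32's exponent range, so gradients during training are less prone to overflow or underflow than under fp16, while fp16 remains a safe and widely supported choice for pure inference.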