6 files ±0  6 suites ±0  14m 19s :stopwatch: +9s
12 tests ±0  9 :heavy_check_mark: ±0  3 :zzz: ±0  0 :x: ±0
60 runs ±0  42 :heavy_check_mark: ±0  18 :zzz: ±0  0 :x: ±0

Results for commit 224bebf7. ± Comparison against base commit 3203cc16.
This revert addresses a critical issue introduced in the previous pull request (#3735), where the simplification of pad token configuration inadvertently led to the PAD token being mapped to the same token ID as the UNK (unknown) token. This mapping anomaly resulted in quality degradation during fine-tuning.
The problem surfaced during fine-tuning: instead of learning to predict an EOS (end-of-sequence) token to signal where a sequence should stop, the model learned to predict the UNK token at the end of sequences. This hindered the model's ability to recognize when to halt during generation, degrading the overall performance and quality of the fine-tuned model.
This revert restores the previous pad token setup and removes the unintended PAD-to-UNK mapping, ensuring that the model correctly learns to predict EOS tokens for proper sequence termination during fine-tuning.
Demonstration of the bug that was introduced, using Llama-2
Current:
The issue here is that we're mapping the new PAD token to the same token ID as the UNK token; since padding is appended to the end of a sequence, that UNK ID becomes the last token passed into the model's forward pass.
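For reference, here is a minimal sketch of the ID collision using the HuggingFace `transformers` API. This is an illustration rather than Ludwig's actual code; the checkpoint name and the exact pad-token assignment are assumptions made for the demo:

```python
from transformers import AutoTokenizer

# Illustrative sketch only (not Ludwig's actual code). Assumes access to the
# Llama-2 checkpoint; Llama-2's tokenizer ships without a PAD token.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.padding_side = "right"  # pad at the end of the sequence, as during fine-tuning

# Stand-in for the buggy configuration: the new PAD token resolves to the same
# token ID as the UNK token (ID 0, "<unk>", for Llama-2).
tokenizer.pad_token = tokenizer.unk_token

assert tokenizer.pad_token_id == tokenizer.unk_token_id
print(tokenizer.pad_token_id, tokenizer.unk_token_id)  # 0 0

# Every right-padded sequence now ends in the UNK/PAD ID, so the last tokens
# passed into the model's forward pass look like UNK instead of EOS.
batch = tokenizer(["short example"], padding="max_length", max_length=8)
print(batch["input_ids"][0])  # trailing positions are all 0 (the UNK/PAD ID)
```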
This is what used to happen before (and what this revert will go back to)
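Below is a hedged sketch of a setup along the lines of what the revert goes back to, where PAD no longer collides with UNK. The exact pre-#3735 configuration lives in Ludwig's git history; tying PAD to EOS is shown here purely as an illustrative choice:

```python
from transformers import AutoTokenizer

# Illustrative sketch only; the exact previous Ludwig setup may differ.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.padding_side = "right"

# Assumed setup for illustration: reuse the EOS token as the PAD token, so the
# padded positions at the end of a sequence carry the EOS ID, not the UNK ID.
tokenizer.pad_token = tokenizer.eos_token

assert tokenizer.pad_token_id == tokenizer.eos_token_id
assert tokenizer.pad_token_id != tokenizer.unk_token_id
print(tokenizer.pad_token, tokenizer.pad_token_id)  # "</s>" 2 for Llama-2

# The final positions the model is trained on now look like EOS, so it learns to
# emit EOS (rather than UNK) to terminate generation.
```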