HomebrewNLP / Olmax

HomebrewNLP in JAX flavour for maintainable TPU training
BSD 2-Clause "Simplified" License

Alternative Losses #54

Open ClashLuke opened 2 years ago

ClashLuke commented 2 years ago

Currently, we use only the softmax classification (cross-entropy) loss as the language-modeling loss for next-token prediction. However, works such as T-Few showed that adding auxiliary losses, such as explicit length penalties during training, can improve downstream-task performance. Additionally, works like DCL and InfoLOOB demonstrated that changing the fundamental structure of the loss away from softmax classification can speed up convergence, so a similar approach could be beneficial for us.
In this issue, we'll explore whether using InfoLOOB's classification loss as the language-modeling objective helps, or whether we should change the objective entirely.
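
For reference, a minimal sketch of the current objective, not the repo's actual implementation: it assumes logits of shape `[batch, seq, vocab]` and already-shifted integer targets, and the function name is illustrative.

```python
import jax
import jax.numpy as jnp


def cross_entropy_lm_loss(logits: jnp.ndarray, targets: jnp.ndarray) -> jnp.ndarray:
    """Softmax cross-entropy for next-token prediction.

    logits:  [batch, seq, vocab] unnormalized scores
    targets: [batch, seq] integer ids of the next token (already shifted)
    """
    log_probs = jax.nn.log_softmax(logits, axis=-1)
    # Log-probability the model assigns to the correct next token.
    target_log_probs = jnp.take_along_axis(log_probs, targets[..., None], axis=-1)[..., 0]
    return -target_log_probs.mean()
```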

ClashLuke commented 2 years ago

PolyLoss vs CrossEntropy

ClashLuke commented 2 years ago

PolyLoss (green) performs quite a bit worse than CrossEntropy (bisque): [loss-curve plot]
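
For context, a hedged sketch of the PolyLoss variant being compared here, assuming it is Poly-1, which adds an `eps * (1 - p_t)` term on top of cross-entropy; the default `eps` and the function name are assumptions, not the values used in the run above.

```python
import jax
import jax.numpy as jnp


def poly1_cross_entropy(logits: jnp.ndarray, targets: jnp.ndarray, eps: float = 1.0) -> jnp.ndarray:
    """Poly-1 loss: cross-entropy plus eps * (1 - p_t) for the target class."""
    log_probs = jax.nn.log_softmax(logits, axis=-1)
    target_log_probs = jnp.take_along_axis(log_probs, targets[..., None], axis=-1)[..., 0]
    ce = -target_log_probs             # standard cross-entropy term
    pt = jnp.exp(target_log_probs)     # probability assigned to the correct token
    return (ce + eps * (1.0 - pt)).mean()
```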

We could still try InfoLOOB (DCL), as it appeared promising before: [loss-curve plot]
However, after reaching a loss of -100, InfoLOOB ran into NaNs, which halted training. Nothing like this happened with CrossEntropy, which is why CrossEntropy produced a better final model even though its initial convergence was slower.
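
The NaN is consistent with how InfoLOOB normalizes: unlike softmax cross-entropy, the target logit is excluded from the denominator, so the loss is not bounded below by zero and can keep falling once the target is well separated from the negatives. A minimal sketch of that behaviour, adapted to a vocabulary classification setting; this is illustrative only and not the repo's implementation.

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp


def infoloob_lm_loss(logits: jnp.ndarray, targets: jnp.ndarray) -> jnp.ndarray:
    """InfoLOOB-style classification loss (leave-one-out normalizer).

    The target logit is removed from the log-sum-exp, so once the model
    separates the target from the negatives the loss keeps decreasing
    without bound, matching the observed drift to -100 and the eventual NaN.
    """
    target_logits = jnp.take_along_axis(logits, targets[..., None], axis=-1)[..., 0]
    # Exclude the target class from the normalizer ("leave one out").
    is_target = jnp.arange(logits.shape[-1]) == targets[..., None]
    negatives_lse = logsumexp(jnp.where(is_target, -jnp.inf, logits), axis=-1)
    return (negatives_lse - target_logits).mean()
```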