Have you tried mixing SPIN with DPO, i.e., taking the union of the DPO pairs and the SPIN pairs at each iteration?
I'm also wondering about the difference between SFT and SPIN at a high level, judging from the loss. SPIN not only pushes the model toward the ground truth, but also pushes it away from the mistakes made by the last checkpoint. Is this a kind of regularization, or a more aggressive learning strategy?
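To make the question concrete, here is a toy sketch of how I read the two objectives. This is my own simplification (scalar sequence log-probabilities, `beta` as the temperature, the old checkpoint's sample as the rejected response), not the paper's exact implementation:

```python
import math

def sft_loss(logp_gt):
    # SFT: maximize likelihood of the ground-truth response only
    return -logp_gt

def spin_loss(logp_gt_new, logp_gt_old, logp_gen_new, logp_gen_old, beta=1.0):
    # SPIN (DPO-style): raise the ground truth *relative to* the last
    # checkpoint, while lowering the response that checkpoint generated
    # (i.e., "leave the mistakes made by the last checkpoint")
    margin = beta * ((logp_gt_new - logp_gt_old) - (logp_gen_new - logp_gen_old))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
```

Under this reading, SFT only has the first pulling term, while SPIN's loss keeps shrinking as the model also pushes down its own previous generations, which is what makes it look like more than plain regularization to me.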
Very impressive work! It would be a huge benefit if you could share your thoughts on this.