CASIA-LM / MoDS


Could you please provide more details on training DeBERTa as a reward model? #2

Open · 4daJKong opened 9 months ago

4daJKong commented 9 months ago

You mentioned:

> This is a reward model designed based on the DeBERTa architecture, and is trained on four different types of human feedback data, endowing it with the abilities of QA model evaluation, reward scoring, and detecting potentially toxic responses via ranking.

However, I'm curious about the datasets and methodology used to train this reward model. Was it optimized exclusively on English QA data? I've observed that when I apply it to a Chinese QA dataset, it consistently yields poor scores (a sketch of how I invoke it is below). I'd appreciate your insight on this.
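
For context, here is roughly how I score QA pairs (a minimal sketch: the checkpoint name `OpenAssistant/reward-model-deberta-v3-large-v2` is my guess at a comparable public DeBERTa reward model, not necessarily the one MoDS uses, and the QA pairs are made-up examples). Please correct me if MoDS loads a different checkpoint or formats the inputs differently.

```python
# Sketch of reward scoring with a DeBERTa-based reward model.
# Assumption: the checkpoint below is a stand-in; MoDS may use a different one.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "OpenAssistant/reward-model-deberta-v3-large-v2"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Made-up QA pairs; the Chinese pair illustrates the case that scores poorly for me.
pairs = [
    ("What is the capital of France?", "The capital of France is Paris."),
    ("法国的首都是哪里？", "法国的首都是巴黎。"),
]

for question, answer in pairs:
    # The reward model takes (question, answer) as a sentence pair and returns
    # a single scalar logit: higher means a better-rated response.
    inputs = tokenizer(question, answer, return_tensors="pt", truncation=True)
    with torch.no_grad():
        score = model(**inputs).logits[0].item()
    print(f"{question} -> reward score: {score:.3f}")
```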