YangLing0818 / SuperCorrect-llm

SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights
https://arxiv.org/abs/2410.09008

Will the Hierarchical Thought Template training data and Cross-model Collaborative DPO training data be open-sourced? #2

Open WuXnkris opened 1 week ago

No description provided.

YangLing0818 commented 4 days ago

Yes, we will open-source all the training data and code upon paper acceptance.