Tracking instruction-tuned LLM openness. Paper: Liesenfeld, Andreas, Alianda Lopez, and Mark Dingemanse. 2023. “Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators.” In Proceedings of the 5th International Conference on Conversational User Interfaces. doi:10.1145/3571884.3604316.
Model card description: Storm-7B is presented as the first open-source language model comparable to the GPT-4 series on the AlpacaEval 2.0 leaderboard, ranking 3rd in length-controlled win rate.
The stated recipe is simple: 1) fine-tuning from Openchat-3.5-0106; 2) iterative DPO training, a variant of DPO in which the language model repeatedly learns from preference labels produced by a trained reward model. The developers say they will release their technical report and code as soon as possible.
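Since no report or code is available yet, the following is only a generic sketch of the objective that "iterative DPO" builds on: the standard DPO loss (Rafailov et al., 2023) in PyTorch, with comments indicating how the iterative variant typically re-labels data and refreshes the reference model each round. Nothing here reflects Storm-7B's actual implementation.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective (Rafailov et al., 2023).

    Each argument is a 1-D tensor of summed per-sequence
    log-probabilities; beta scales the implicit KL penalty that keeps
    the policy close to the reference model.
    """
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    # Reward the policy for widening its log-prob margin on preferred
    # responses relative to rejected ones.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

# Toy check with random log-probabilities (batch of 4 preference pairs).
batch = [torch.randn(4) for _ in range(4)]
print(dpo_loss(*batch))

# The *iterative* variant wraps this loss in an outer loop, roughly:
#   1. sample responses from the current policy on a prompt set;
#   2. score them with the trained reward model and pair them into
#      (chosen, rejected) preference data;
#   3. run DPO with the previous round's policy as the frozen
#      reference, then go back to step 1 with the updated policy.
```

In most published iterative-DPO recipes, refreshing the reference model each round is what distinguishes the approach from single-round DPO: the KL anchor tracks the improving policy instead of pinning it to the initial fine-tune.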
Slight doubt whether this qualifies as an instruction-tuned model; no details available yet.
https://huggingface.co/jieliu/Storm-7B