This repository includes the original implementation of SELF-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection by Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi.
Hi!
Thanks for your great work.
I noticed that the task parameter in all the example commands (run_baseline_lm.py) is set to 'qa', but in SELF-RAG the task can take different values, such as arc_c, fever, asqa, and factscore.
So when running run_baseline_lm.py, should I set the task parameter differently depending on the task, or is setting it to 'qa' fine for all of them?
I have the same question about the 'max_new_tokens' parameter: for a fair comparison, should it be set to the same value that SELF-RAG uses for each particular task?
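For context, here is a sketch of how I intend to invoke the script; the flag names are my assumption based on the repo's example commands, and the elided flags stand in for the rest of my configuration:

```shell
# Hypothetical invocation (flag names assumed from the repo's examples).
# Is --task meant to vary per dataset (arc_c / fever / asqa / factscore),
# or should it stay 'qa' for all of them? And should --max_new_tokens
# match the per-task value used in the SELF-RAG experiments?
python run_baseline_lm.py \
  --task qa \
  --max_new_tokens ... \
  ...
```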
Thanks.