@motin Currently my code constructs the search space from the combination of prompt variations and model configs. Should we add questions to the search space too? I didn't add questions because we can query all questions with `LLMChain.apply(questions)`, so I don't have to write a for loop. But then when we add more questions, we would have to re-evaluate all of them.
Thanks for posing this question; it's a good one. I suggest we include questions in the search space so that we are independent of which underlying LLM framework we use.
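A minimal sketch of what treating questions as a third axis of the search space could look like, assuming the space is just a Cartesian product (all names below are hypothetical illustrations, not the project's actual code):

```python
from itertools import product

# Hypothetical search-space axes; the real project presumably has
# richer prompt-variation and model-config objects.
prompt_variations = ["prompt_a", "prompt_b"]
model_configs = [{"temperature": 0.0}, {"temperature": 0.7}]
questions = ["q1", "q2", "q3"]

# Including questions as an axis means each evaluation point is fully
# self-describing, with no dependence on a framework batch API like
# LLMChain.apply to fan out over questions.
search_space = list(product(prompt_variations, model_configs, questions))

print(len(search_space))  # 2 prompts x 2 configs x 3 questions = 12 points
```

One practical upside of this shape: when new questions are added, the already-evaluated points can be cached by their (prompt, config, question) key, so only the new combinations need to be run.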