Hi Keiran,
thank you for your contribution to prompt research! My seminar partner and I are trying to understand your code and test the evaluation step separately with different LLMs as part of a course at our university. To do this, we would like to extract the generated prompts and then evaluate/rank them in a separate evaluation step with different LLMs.
Could you provide us with a JSON (or similar) file containing the prompts per task type, in case you generated more than the limit of 50 prompts per run while testing your code and saved a corresponding file? (Our resources do not allow us to run this for every task with davinci.)
Thanks in advance and have a nice day!
Louis