A principled instruction benchmark on formulating effective queries and prompts for large language models (LLMs). Our paper: https://arxiv.org/abs/2312.16171
I'm looking at this project, but I don't see any of the principles you describe, nor the modified prompts, so I'm fairly confused. There are also no instructions for running generate.py yourself, since that would require users to install the OpenAI package.
All I can find are the written-out instructions plus the outputs from the models. Where are the modified prompts that apply each principle? Am I missing something?