DevMandal-Sarvam opened 1 month ago
Hi! Prompting usually requires a lot of trials to find the one that best suits your use case. Here is the prompt hub from LangChain that you can search and play with. As for the position of tool descriptions, I believe it could be important if the context is long, but more research will be needed to determine whether the position matters or not. There is also a long-context information retrieval test called Needle In A Haystack that people run to measure LLM performance in long-context situations.
I would also recommend this paper to understand more about "How Language Models Use Long Contexts"; it shows that the position of the documents matters.
There seem to be ample prompt resources for niche applications (interview helper, essay writing), but almost nothing for general-purpose agents with multiple tools, states, and specific output formats.
So basically, what prompt structure should one follow to get started with building agents?
Additionally, are there any resources on whether the position of important instructions in the prompt has an effect on performance? E.g., will placing the tool descriptions at the end of the prompt produce better results than placing them in the middle?
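To make the question concrete, here is a minimal sketch of one way to parameterize the position of tool descriptions so the two placements can be A/B tested. The section names, tool entries, and `tools_last` flag are all my own assumptions for illustration, not a documented standard or any library's API:

```python
# Hypothetical agent prompt layout; section names and ordering are
# assumptions chosen for illustration, not a recommended standard.

TOOLS = {
    "search": "search(query: str) -> str: look up facts on the web.",
    "calculator": "calculator(expr: str) -> float: evaluate arithmetic.",
}

def build_agent_prompt(task: str, tools: dict, tools_last: bool = True) -> str:
    """Assemble a system prompt; `tools_last` toggles whether the tool
    descriptions go at the end or near the top, so both placements can
    be compared on the same task set."""
    role = "You are a general-purpose agent. Think step by step."
    output_format = (
        "Respond in JSON: "
        '{"thought": "...", "tool": "<name or null>", "args": {}}'
    )
    tool_block = "Available tools:\n" + "\n".join(
        f"- {desc}" for desc in tools.values()
    )
    sections = [role, f"Task: {task}", output_format]
    # The position of the tool descriptions is the variable under test.
    if tools_last:
        sections.append(tool_block)
    else:
        sections.insert(1, tool_block)
    return "\n\n".join(sections)
```

Running the same evaluation set through both variants (`tools_last=True` vs. `False`) would give at least a rough empirical answer for a given model, in the spirit of the Needle In A Haystack tests mentioned above.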