Open Vegoo89 opened 1 year ago
I am very interested in your suggestions because I encounter the same problem you mentioned. Can you describe in detail the structure you built for the query messages history (user queries along with the optimized query responses from OpenAI)?
Currently, in the query messages history we keep only user questions + the optimized search queries from the OpenAI endpoint. Example (I wrote it by hand just now, didn't copy it from actual queries):
[
    {
        "role": "user",
        "content": "what is abc?"
    },
    {
        "role": "assistant",
        "content": "abc definition"
    },
    {
        "role": "user",
        "content": "what is def?"
    },
    {
        "role": "assistant",
        "content": "def definition"
    },
    {
        "role": "user",
        "content": "define both"
    },
    {
        "role": "assistant",
        "content": "definition of abc and def"
    }
]
As stated, we glue the prompt + few shots at the start and add the current user query at the end (with the prefix). After that, the prefix is no longer present in the 'real' query messages history.
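A rough sketch of that assembly, for clarity. `QUERY_PROMPT`, `FEW_SHOTS`, and `build_query_messages` are illustrative names, not the actual identifiers from chatreadretrieveread.py:

```python
# Illustrative names only; not the real code from chatreadretrieveread.py.
QUERY_PROMPT = "Generate a search query for the knowledge base from the conversation."
FEW_SHOTS = [
    {"role": "user", "content": "What are my health plan options?"},
    {"role": "assistant", "content": "health plan options"},
]

def build_query_messages(history, user_question):
    """Glue prompt + few shots at the start, then the history,
    then the current user question with the prefix."""
    messages = [{"role": "system", "content": QUERY_PROMPT}]
    messages += FEW_SHOTS
    # Today this is the full chat history; in the proposal it is only
    # the question/generated-query pairs shown above.
    messages += history
    messages.append(
        {"role": "user", "content": "Generate search query for: " + user_question}
    )
    return messages
```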
Thanks so much for sharing your approach! I'm going to CC @srbalakr from the ACS team who worked most recently on the query generation for their thoughts. I think PRs are always great to share with the community, even those that don't get merged, but if this produces overall better response quality across many queries/knowledge bases, then we may want it in main.
Yes, please share the PR. I have also put up a PR to stabilize the generation for lengthy chats using function calls; it should address most of the concerns.
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this issue will be closed.
Re-opened, I'm still interested in this. I don't have a multi-turn evaluation setup yet, only single-turn (as you can see in https://github.com/Azure-Samples/azure-search-openai-demo/pull/967), so I haven't been able to evaluate this change programmatically.
This issue is for a: (mark with an x)
[x] feature request

This is not really a bug, so I am marking it as a feature request.
We are productionizing this PoC for corporate usage and found a few things that make the bot work better/smoother and generate more predictable queries to send to Cognitive Search - at least in our tests.
Right now in the code, the optimized search query in chatreadretrieveread.py is generated by gluing together the prompt, the few shots, the conversation history, and the current user question. This works pretty well; however, on longer conversation chains we found that the query can get messy, as after the few shots comes the real conversation history - with full question answers - which seems out of place there.
We came up with a simple idea: keep the history of user questions and the queries generated by the bot as a separate field in the request and response, which allows us to bounce these back and forth and keep the backend stateless.
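A hypothetical shape for that stateless round-trip; the field name `query_messages_history` is illustrative, not part of the current API:

```python
# Field names are illustrative, not the actual request/response schema.
request = {
    "history": [...],  # normal chat history shown to the user
    "query_messages_history": [  # user questions + previously generated search queries
        {"role": "user", "content": "what is abc?"},
        {"role": "assistant", "content": "abc definition"},
    ],
    "question": "what is def?",
}

response = {
    "answer": "def definition ...",
    "query_messages_history": [
        # updated list, echoed back so the client can send it on the next turn
    ],
}
```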
So in the end - after implementation - the optimized search query messages would be generated from the prompt, the few shots, and the bounced question/query history, plus the current question.
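A minimal sketch of what that final message list could look like, reusing the illustrative `QUERY_PROMPT` / `FEW_SHOTS` names from the sketch above (not the exact payload):

```python
# Proposed query-generation messages: prompt + few shots first, then only
# user questions paired with the search queries generated for them, then
# the current question with the prefix.
query_messages = [
    {"role": "system", "content": QUERY_PROMPT},
    *FEW_SHOTS,
    {"role": "user", "content": "what is abc?"},
    {"role": "assistant", "content": "abc definition"},
    {"role": "user", "content": "what is def?"},
    {"role": "assistant", "content": "def definition"},
    {"role": "user", "content": "Generate search query for: define both"},
]
```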
If you guys think this approach sounds good, I can open a PR with the proposed changes. Thanks!