Open sam-h-long opened 14 hours ago
The backend follows the protocol described here: https://github.com/microsoft/ai-chat-protocol/tree/main/spec#microsoft-ai-chat-protocol-api-specification-version-2024-05-29
Let me know if you have additional questions after reading that.
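A minimal sketch of how the request/response shapes in that spec map onto a multi-turn exchange with the `chat/` endpoint. The field names (`messages`, `context`, `session_state`) follow this repo's backend; the `overrides` keys shown are assumptions and should be checked against the spec and the code in `app/backend/`.

```python
# Hedged sketch: JSON shapes for a multi-turn chat exchange.
# Field names and override keys are assumptions -- verify against the
# AI Chat Protocol spec and the backend implementation.

# First request: a single user turn, with optional overrides in "context".
first_request = {
    "messages": [{"role": "user", "content": "What does a Product Manager do?"}],
    "context": {"overrides": {"retrieval_mode": "hybrid", "top": 3}},  # assumed keys
    "session_state": None,
}

# A response in roughly the shape produced by run_without_streaming():
# the assistant "message", plus "context" and "session_state".
example_response = {
    "message": {"role": "assistant", "content": "A Product Manager ..."},
    "context": {"data_points": []},
    "session_state": None,
}

def build_followup_request(prev_request, prev_response, next_question):
    """Append the assistant's previous message and the new user turn,
    carrying session_state forward into the next chat/ call."""
    messages = list(prev_request["messages"])
    messages.append(prev_response["message"])
    messages.append({"role": "user", "content": next_question})
    return {
        "messages": messages,
        "context": prev_request.get("context"),  # overrides re-sent as-is
        "session_state": prev_response.get("session_state"),
    }

followup = build_followup_request(
    first_request, example_response, "What about a Dev Lead?"
)
```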
This issue is for a: (mark with an `x`)

Minimal steps to reproduce
Similar to #836, I am curious about the backend API calls that are needed to create a chat experience. I started by running the `azd` command steps in the deploy section of the README.md to copy the code locally and set credentials.
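The deploy steps I ran were roughly the following (a sketch from memory; the README is the authoritative sequence, and the template name/flags should be checked there):

```shell
# Hedged sketch of the azd steps from the README.
azd auth login                           # sign in to Azure
azd init -t azure-search-openai-demo     # copy the template code locally
azd up                                   # provision resources and deploy
```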
Next, within the `azure-search-openai-demo/app/backend/` directory, I created a virtual Python env. Lastly, I ran `quart` to get the backend running locally.
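Roughly, that looked like the following (a sketch: the app module `main:app` and the port are assumptions — check the repo's run scripts for the exact invocation):

```shell
# Inside azure-search-openai-demo/app/backend/
python3 -m venv .venv                  # create a virtual environment
source .venv/bin/activate              # activate it (Windows: .venv\Scripts\activate)
pip install -r requirements.txt        # install backend dependencies
export QUART_APP=main:app              # assumed app module
quart run --port 50505                 # run the backend locally (port is an assumption)
```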
Any log messages given by the failure
Expected/desired behavior
Calling the `chat/` endpoint the following way (unsure if my `overrides` are redundant?):

Tracing the application with Datadog, I am fairly sure the format of the response comes from `run_without_streaming()`. If so, my guess is that the frontend implementation simply passes the previous `"message"` into the next `chat/` call before the next question? For example:

In other words, I am trying to understand whether the `"context"` or `"session_state"` output from
run_without_streaming()
would also get passed to the next `chat/` call? Any further documentation of the APIs needed to simulate a chat experience would be great!

OS and Version?
azd version?
azd version 1.10.3 (commit 0595f33fe948ee6df3da492567e3e7943cb9a733)
Versions
Mention any other details that might be useful
While some additional documentation on the prompt sequences would be useful, overall this is a really great repository. In contrast to sample-app-aoai-chatGPT, I have found the calls made to the Search Client & Azure OpenAI APIs much easier to follow. Great work and thank you 🙌 🙌