A Solution Accelerator for the RAG pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences. It covers the most common requirements and best practices.
Motivation
To demonstrate a lower-code solution for chatting with your data. This could also be used in the future to link with other experimentation projects to auto-generate the configuration for the flow.
How would you feel if this feature request was implemented?
Requirements
Implement a basic version of chat with your data using prompt flow as the back end
Allow the deployment/configuration of this via the current deployment mechanism
Document that this option exists and explain when to use it
Stretch: implement all features of chat with your data using prompt flow
Tasks
To be filled in by the engineer picking up the issue
[x] #701
[ ] #915
[ ] Add a new orchestrator, "promptflow", that calls this prompt flow
[ ] Add VS Code tooling for prompt flow
[ ] Add documentation stating that the prompt is tied to the model: if you change the model, you should verify that the prompt is still the best fit. This is one of the reasons for adding prompt flow
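To illustrate how a "promptflow" orchestrator could slot into the existing orchestration layer, here is a minimal sketch. All names (`OrchestratorBase`, `PromptFlowOrchestrator`, the input/output keys) are hypothetical and do not reflect the accelerator's actual API; the flow itself is stood in by a plain callable, where a real implementation would invoke a deployed prompt flow endpoint.

```python
from abc import ABC, abstractmethod
from typing import Callable

# Hypothetical orchestrator interface -- illustrative only, not the
# accelerator's actual class names or signatures.
class OrchestratorBase(ABC):
    @abstractmethod
    def orchestrate(self, question: str, chat_history: list) -> str:
        ...

class PromptFlowOrchestrator(OrchestratorBase):
    """Delegates answering to a prompt flow (here, any callable that
    accepts a dict of flow inputs and returns a dict of flow outputs)."""

    def __init__(self, flow: Callable[[dict], dict]):
        # In a real deployment this would wrap an HTTP call to a
        # deployed prompt flow endpoint.
        self.flow = flow

    def orchestrate(self, question: str, chat_history: list) -> str:
        result = self.flow({"question": question, "chat_history": chat_history})
        return result["answer"]

# Stand-in for a deployed flow, for demonstration only.
def fake_flow(inputs: dict) -> dict:
    return {"answer": f"echo: {inputs['question']}"}

orchestrator = PromptFlowOrchestrator(fake_flow)
print(orchestrator.orchestrate("What is RAG?", []))  # echo: What is RAG?
```

The point of the indirection is that the rest of the application only depends on the orchestrator interface, so selecting "promptflow" via configuration would not require changes elsewhere.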