A Solution Accelerator for the RAG pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences. It includes the most common requirements and best practices.
Motivation
This is a question rather than a feature request. Is the code able to handle calls to multiple AI Search indexes (in the same AI Search service or in different ones)? As I understand it, the CWYD accelerator supports a Semantic Kernel orchestration strategy. Based on some reading, the Semantic Kernel SDK includes the concept of a "planner": the planner orchestrates multiple calls to the Azure OpenAI service to build a plan that answers the user's question.

It also lets you create "plugins", which can be calls out to native code. The question is: can the code handle such plugins? Where and how would I change it to include calls to multiple AI Search indexes?
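To make the request concrete, here is a minimal sketch of what such a multi-index plugin could look like. Everything here is hypothetical and illustrative, not part of the accelerator: `IndexClientStub` stands in for an `azure.search.documents.SearchClient` bound to one index, and in a real Semantic Kernel plugin the `search_all` method would be decorated with `@kernel_function` so the planner could invoke it.

```python
# Hypothetical sketch: fan one query out across several AI Search indexes
# and merge the results by relevance score. All names are illustrative.
from dataclasses import dataclass


@dataclass
class SearchResult:
    source_index: str
    content: str
    score: float


class IndexClientStub:
    """Stands in for a SearchClient bound to a single AI Search index."""

    def __init__(self, index_name, documents):
        self.index_name = index_name
        self._documents = documents  # list of (content, score) pairs

    def search(self, query):
        # A real client would call the Azure AI Search service here.
        return [SearchResult(self.index_name, c, s) for c, s in self._documents]


class MultiIndexSearchPlugin:
    """In Semantic Kernel, search_all would carry @kernel_function so the
    planner can call it as a plugin step."""

    def __init__(self, clients):
        self.clients = clients

    def search_all(self, query, top=3):
        # Query every index, then keep the globally best-scoring hits.
        merged = []
        for client in self.clients:
            merged.extend(client.search(query))
        merged.sort(key=lambda r: r.score, reverse=True)
        return merged[:top]


plugin = MultiIndexSearchPlugin([
    IndexClientStub("contracts-index", [("clause A", 0.92), ("clause B", 0.41)]),
    IndexClientStub("policies-index", [("policy X", 0.88)]),
])
hits = plugin.search_all("termination terms")
print([(h.source_index, h.score) for h in hits])
# → [('contracts-index', 0.92), ('policies-index', 0.88), ('contracts-index', 0.41)]
```

The clients could just as well point at indexes in different AI Search services, since each stub (or real `SearchClient`) carries its own endpoint and index name.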
How would you feel if this feature request was implemented?
Very happy!