Sharing the learnings we have been gathering along the way to enable Azure OpenAI at enterprise scale in a secure manner. GPT-RAG core is a Retrieval-Augmented Generation pattern running in Azure, using Azure Cognitive Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences.
[ ] Review Content Safety features to check whether they add value over the AOAI out-of-the-box (OOTB) content filtering.
[ ] Migrate blocked words list (AOAI filtering)
Item description
Users should be able to define which functions from the Responsible AI plugin they want to use as guardrails, both when receiving the user's request and before sending the response back to the user, along with their thresholds.
Users can configure which functions and thresholds to use in the gpt-rag configuration.
Checks that can be handled by native Azure OpenAI content filtering should use it, so we save API calls.
Orchestrator responses should include metadata about the guardrail results so that a future APIM or Security Function can check and enforce them.
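As a sketch of how configurable thresholds and the response metadata could fit together (the names `GUARDRAIL_THRESHOLDS` and `apply_guardrails` are illustrative, not part of the current gpt-rag code; severities follow the Azure AI Content Safety 0-7 scale):

```python
# Hypothetical sketch: compare per-category severities from a content
# analysis call (e.g. Azure AI Content Safety analyze_text) against
# user-configured thresholds, and build guardrail metadata that the
# orchestrator could attach to its response for APIM / a Security
# Function to enforce later. All names here are illustrative.

GUARDRAIL_THRESHOLDS = {  # would come from the gpt-rag configuration
    "Hate": 2,
    "Violence": 2,
    "Sexual": 2,
    "SelfHarm": 2,
}

def apply_guardrails(severities: dict,
                     thresholds: dict = GUARDRAIL_THRESHOLDS) -> dict:
    """Return guardrail metadata for the orchestrator response.

    `severities` maps category name -> detected severity (0-7).
    A category with no configured threshold is never flagged.
    """
    violations = {cat: sev for cat, sev in severities.items()
                  if sev >= thresholds.get(cat, 8)}
    return {
        "guardrails": {
            "checked": sorted(thresholds),
            "violations": violations,
            "blocked": bool(violations),
        }
    }

# Example: a Violence severity of 4 exceeds the configured threshold of 2.
meta = apply_guardrails({"Hate": 0, "Violence": 4, "Sexual": 0, "SelfHarm": 0})
print(meta["guardrails"]["blocked"])  # prints True
```

A downstream consumer (APIM policy or Security Function) would only need to read `guardrails.blocked` and `guardrails.violations` to enforce a decision, without re-running the analysis.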
Out-of-scope items, to be handled in a separate item:
1) IaC (Bicep) update to create and configure the Content Safety service
2) Architecture redesign:
Create a new Azure Function, "Custom Security Policy", that will receive text from the Orchestrator and validate that it does not contain violence, sexual content, etc.
This function is the beginning of a Security Function that adds security controls to the platform; additional controls will be introduced later.
We need to prepare this function so the Security Team can add further controls (e.g., Microsoft Purview).
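One way to keep the function open for the Security Team to extend is a simple registry of pluggable checks. This is a hypothetical sketch of that shape, not the actual function code; the control names and the `validate` entry point are assumptions:

```python
# Hypothetical sketch of a pluggable "Custom Security Policy" pipeline.
# Each control takes the text and returns a list of findings (empty = pass).
# A future control (e.g. a Microsoft Purview check) would register itself
# the same way, without changing the validation entry point.
from typing import Callable, List

SecurityControl = Callable[[str], List[str]]
CONTROLS: List[SecurityControl] = []

def register(control: SecurityControl) -> SecurityControl:
    """Decorator that adds a control to the pipeline."""
    CONTROLS.append(control)
    return control

@register
def blocked_words(text: str) -> List[str]:
    # Stand-in for the migrated blocked-words list (see the AOAI
    # filtering task above); the word itself is just an example.
    words = {"secretword"}
    return [f"blocked word: {w}" for w in words if w in text.lower()]

def validate(text: str) -> dict:
    """Run every registered control and aggregate findings for the caller."""
    findings = [f for control in CONTROLS for f in control(text)]
    return {"allowed": not findings, "findings": findings}

print(validate("hello world"))         # no findings, allowed
print(validate("my secretword leak"))  # flagged by the blocked-words control
```

The Azure Function body would then be a thin HTTP wrapper around `validate`, so adding a control never touches the request-handling code.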
List of tasks (see the item description below):
Item description
Users should be able to define which functions from the Responsible AI plugin they want to use as guardrails, both when receiving the user's request and before sending the response back to the user, along with their thresholds.
List of functions:
Notes:
Out-of-scope items, to be handled in a separate item:
1) IaC (Bicep) update to create and configure the Content Safety service
2) Architecture redesign: Create a new Azure Function, "Custom Security Policy", that will receive text from the Orchestrator and validate that it does not contain violence, sexual content, etc. This function is the beginning of a Security Function that adds security controls to the platform; additional controls will be introduced later.
![Image](https://github.com/Azure/gpt-rag-orchestrator/assets/6539041/df70caac-8ae4-4d75-8c5f-47fbfbbc147a)
We need to prepare this function so the Security Team can add further controls (e.g., Microsoft Purview).
References:
- https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/azure-ai-announces-prompt-shields-for-jailbreak-and-indirect/ba-p/4099140
- https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/detect-and-mitigate-ungrounded-model-outputs/ba-p/4099261