Azure / enterprise-azureai

Unleash the power of Azure AI to your application developers in a secure & manageable way with Azure API Management and Azure Developer CLI.
MIT License

Add additional logging capabilities to AI Proxy #21

Open iMicknl opened 10 months ago

iMicknl commented 10 months ago

Since Azure API Management is not able to do end-to-end logging on streaming requests, we will need to support this in the AI Proxy. It should support multiple logging destinations.

Logging options:

- Cosmos DB
- Azure Monitor / Log Analytics
- Blob Storage

Since many customers have turned off abuse monitoring, this is an important capability to implement.

| Key | Value | Content Type |
| --- | --- | --- |
| `LoggingStrategy` | `"all"` | string |

Valid options: `["all", "abuse-monitoring", "none"]`
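The proposed setting could be validated with a small helper; a minimal sketch (the function name and the default of `"all"` are assumptions, not part of the proxy):

```python
# Hypothetical sketch of validating the proposed LoggingStrategy setting.
# VALID_STRATEGIES mirrors the valid options listed above.

VALID_STRATEGIES = {"all", "abuse-monitoring", "none"}

def parse_logging_strategy(value: str) -> str:
    """Return a validated strategy, defaulting to 'all' when unset (assumed default)."""
    strategy = (value or "all").strip().lower()
    if strategy not in VALID_STRATEGIES:
        raise ValueError(
            f"Invalid LoggingStrategy {value!r}; expected one of {sorted(VALID_STRATEGIES)}"
        )
    return strategy
```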

iMicknl commented 10 months ago

See https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython.

We need to double-check whether this currently works for our streaming requests, or whether our AI Proxy will throw an error due to the different payload shape.

Example response (non-streaming) (api-version=2023-03-15-preview)

{
    "error": {
        "message": "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766",
        "type": null,
        "param": "prompt",
        "code": "content_filter",
        "status": 400
    }
}

Example response (non-streaming) (api-version=2023-10-01-preview)

{
    "error": {
        "message": "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766",
        "type": null,
        "param": "prompt",
        "code": "content_filter",
        "status": 400,
        "innererror": {
            "code": "ResponsibleAIPolicyViolation",
            "content_filter_result": {
                "hate": {
                    "filtered": false,
                    "severity": "safe"
                },
                "self_harm": {
                    "filtered": false,
                    "severity": "safe"
                },
                "sexual": {
                    "filtered": false,
                    "severity": "safe"
                },
                "violence": {
                    "filtered": true,
                    "severity": "medium"
                }
            }
        }
    }
}

Example response, with jailbreak detection enabled (non-streaming) (api-version=2023-10-01-preview). Additional objects can be present in `content_filter_result`.

{
    "error": {
        "message": "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766",
        "type": null,
        "param": "prompt",
        "code": "content_filter",
        "status": 400,
        "innererror": {
            "code": "ResponsibleAIPolicyViolation",
            "content_filter_result": {
                "hate": {
                    "filtered": false,
                    "severity": "safe"
                },
                "jailbreak": {
                    "detected": true,
                    "filtered": true
                },
                "self_harm": {
                    "filtered": false,
                    "severity": "safe"
                },
                "sexual": {
                    "filtered": false,
                    "severity": "safe"
                },
                "violence": {
                    "filtered": false,
                    "severity": "safe"
                }
            }
        }
    }
}
azureholic commented 10 months ago

The proxy already detects errors, but there is no handling implementation yet. We need to figure out what each error is and handle it accordingly.

The proxy will respond to the client with the original error response. Additional handling applies to token calculation and, eventually, policy-violation logging (a feature that is not present yet).
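That behavior could be sketched as follows; note that the function and hook names are assumptions for illustration, and the violation-logging hook stands in for the feature that does not exist yet:

```python
# Illustrative sketch (not the proxy's actual code): pass the upstream body
# through unchanged, then branch on the error code for side effects such as
# token accounting and (future) policy-violation logging.

def log_policy_violation(body: dict) -> None:
    # Hypothetical hook for the not-yet-built logging feature.
    print("policy violation:", body["error"].get("message", "")[:60])

def record_token_usage(body: dict) -> None:
    # Hypothetical token-calculation hook for successful responses.
    usage = body.get("usage", {})
    print("tokens:", usage.get("total_tokens", 0))

def handle_upstream_response(body: dict) -> dict:
    code = body.get("error", {}).get("code")
    if code == "content_filter":
        log_policy_violation(body)
    elif code is None:
        record_token_usage(body)
    return body  # the client always receives the original response body
```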