BenderScript / PromptGuardian

All-in-one app that checks LLM prompts for injection, data leaks, and malicious URLs.
Apache License 2.0

Allow clients to select which functions to use in the request #5

Closed by panyuenlau 6 months ago

panyuenlau commented 6 months ago

Added optional parameters to the request so clients can select which checks they'd like the server to run. Example request body (a minimal sketch of the server-side model follows the response example below):

{
    "text": "test",
    "extractedUrls": [],
    "check_url": true,
    "check_openai": false,
    "check_gemini": false,
    "check_azure": false,
    "check_threats": false
}

Corresponding response:

{
    "prompt_injection": {
        "azure": "Prompt injection detection with Azure is disabled per user request",
        "gemini": "Prompt injection detection with Gemini is disabled per user request",
        "openai": "Prompt injection detection with OpenAI is disabled per user request"
    },
    "url_verdict": "No malware URL(s) detected",
    "threats": "Prompt check for DLP wit Umbrella is disabled per user request"
}
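
As a minimal sketch of how these optional flags could be modeled server-side, assuming a Pydantic-style request model. The class name, the default values, and the small usage snippet are illustrative assumptions, not code from this PR; only the field names are taken from the example request body above.

from typing import List

from pydantic import BaseModel


class PromptCheckRequest(BaseModel):
    text: str
    extractedUrls: List[str] = []
    # Per-check toggles; defaulting to True (an assumption) would keep the
    # previous behaviour of running every check when a client omits the flags.
    check_url: bool = True
    check_openai: bool = True
    check_gemini: bool = True
    check_azure: bool = True
    check_threats: bool = False


# Example: a client that only disables the OpenAI check still gets the others.
req = PromptCheckRequest(text="test", extractedUrls=[], check_openai=False)
print(req.check_url)      # True (defaulted, so the URL check still runs)
print(req.check_openai)   # False (explicitly disabled by the client)

With a model along these lines, the handler can consult each flag and either run the corresponding check or return the "disabled per user request" message shown in the response above.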
panyuenlau commented 6 months ago

@BenderScript @vhosakot please review when you have some spare time

vhosakot commented 6 months ago

lgtm, thanks @panyuenlau!