protectai / llm-guard

The Security Toolkit for LLM Interactions
https://llm-guard.com/
MIT License

Security bug in example for "protectai/llm-guard/blob/main/examples/openai_api.py" #184

Open jdwhitaker opened 2 weeks ago

jdwhitaker commented 2 weeks ago

Describe the bug

Line 27 of protectai/llm-guard/blob/main/examples/openai_api.py says `if any(results_valid.values()) is False:`

`any(results_valid.values())` evaluates to True if any of the values is True, so `any(...) is False` holds only when every value is False. This security check therefore passes as long as a single scanner returns True, even if every other scanner flags the prompt.
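To illustrate (the scanner names here are made up; `results_valid` is the variable from the example):

```python
# One scanner passes and one fails, so the prompt should be rejected.
results_valid = {"Anonymize": True, "Toxicity": False}

# any() is True because at least one value is True, so
# `any(...) is False` evaluates to False and the rejection is skipped.
print(any(results_valid.values()) is False)  # False -> prompt is let through
```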

To Reproduce

Steps to reproduce the behavior:

  1. Go to https://github.com/protectai/llm-guard/blob/main/examples/openai_api.py line 27

Expected behavior

It should fail if any of the values is False.
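A minimal sketch of the corrected check, reusing `results_valid` from the example; the scanner values and the error handling are illustrative, not the example's actual code:

```python
results_valid = {"Anonymize": True, "Toxicity": False}  # example values

# Reject the prompt as soon as ANY scanner reports invalid input.
# This also drops the `is False` comparison, which is an anti-pattern
# for booleans in Python.
if not all(results_valid.values()):
    raise ValueError(f"prompt failed validation: {results_valid}")
```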

asofter commented 1 week ago

Hey @jdwhitaker, thanks for submitting this bug report. Indeed, there is an issue; I will fix it.