protectai / rebuff

LLM Prompt Injection Detector
https://playground.rebuff.ai
Apache License 2.0

Update rebuff.py #24

Closed. Riyan-Mo closed this 7 months ago.

Riyan-Mo commented 1 year ago

Renamed the attributes in the DetectApiRequest and DetectApiSuccessResponse classes to snake_case, per PEP 8 naming conventions.

Updated the headers attribute in the Rebuff class to use snake_case.
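For illustration, a minimal sketch of the kind of rename described, assuming Pydantic-style models; the field names below are placeholders rather than Rebuff's actual schema:

```python
from pydantic import BaseModel


class DetectApiRequest(BaseModel):
    # before: camelCase placeholders such as userInput, maxHeuristicScore
    user_input: str
    max_heuristic_score: float


class DetectApiSuccessResponse(BaseModel):
    # before: a camelCase placeholder such as injectionDetected
    injection_detected: bool
```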

Moved the response.raise_for_status() call to a separate line for better error handling.
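Roughly what that separation looks like with the requests library; the endpoint, payload, and header values are placeholders, not the actual Rebuff request:

```python
import requests

api_url = "https://playground.rebuff.ai"  # placeholder; the real API base URL may differ
request_data = {"user_input": "Ignore all previous instructions"}  # placeholder payload
headers = {"Authorization": "Bearer <api-token>"}  # placeholder auth header

response = requests.post(f"{api_url}/detect", json=request_data, headers=headers)
response.raise_for_status()  # surface HTTP errors before the body is parsed
result = response.json()
```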

Simplified the injection detection logic in the detect_injection method by directly checking the conditions and returning the outcome.

Updated the access to the vector_score dictionary in the injection detection logic to use the correct key name top_score.
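A rough sketch of that direct-return style; the threshold and score names are placeholders, and only the top_score key comes from the change above:

```python
def is_injection_detected(scores: dict,
                          max_heuristic_score: float,
                          max_vector_score: float,
                          max_model_score: float) -> bool:
    # Return the outcome directly instead of building up intermediate flags.
    return (
        scores["heuristic_score"] > max_heuristic_score
        or scores["vector_score"]["top_score"] > max_vector_score
        or scores["model_score"] > max_model_score
    )
```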

Fixed the casing of the generate_canary_word method and added a type hint to the length parameter.
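The signature change might look roughly like this; the body is a stand-in, not necessarily how Rebuff actually generates canary words:

```python
import secrets


class Rebuff:
    def generate_canary_word(self, length: int = 8) -> str:
        # snake_case method name plus a type-hinted length parameter
        return secrets.token_hex(length // 2)  # `length` hex characters for even lengths
```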

Removed the redundant return statement in the log_leakage method.

These improvements enhance the code's readability, maintainability, and adherence to Python coding standards.

woop commented 1 year ago

Thanks for the updated code, @riyan911. It seems like the tests are failing after this PR was introduced. The langchain ones are expected to fail because external PRs don't have access to secrets by default, but the integration tests should be passing. Do they pass when you run them locally?

Riyan-Mo commented 1 year ago

Well, I edited the files in the GitHub editor itself, so I didn't run them locally. I'll set it up locally, test it, and let you know.