gyliu513 / langX101

Apache License 2.0

llm guard #182

Closed gyliu513 closed 4 months ago

gyliu513 commented 4 months ago

PR Type

Enhancement


Description


Changes walkthrough 📝

Relevant files
Enhancement
openai-guard.py
Add OpenAI guard script with prompt and response validation

llmguard/openai-guard.py
  • Added a script to demonstrate the use of llm_guard with OpenAI API.
  • Included environment variable setup instructions for OPENAI_API_KEY.
  • Integrated input and output scanners for prompt and response
    validation.
  • Implemented prompt sanitization and response validation logic.
  • +52/-0   

    💡 PR-Agent usage: Comment /help on the PR to get a list of all available PR-Agent tools and their descriptions

    Summary by CodeRabbit

    coderabbitai[bot] commented 4 months ago

    [!WARNING]

    Review failed

    The pull request is closed.

    Walkthrough

    The new file openai-guard.py introduces functionality for securely interacting with the OpenAI API by using input and output scanners to handle sensitive information. It creates client and scanner instances, scans prompts before they are sent to the API, and sanitizes the responses received from it.
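The scan-then-call-then-scan flow described above can be illustrated with a dependency-free sketch. The `scan_prompt` stub below only mimics the `(sanitized_prompt, results_valid, results_score)` return shape used by llm_guard; `ban_word_scanner` is a hypothetical toy scanner, not a real llm_guard scanner:

```python
# Simplified sketch of the scan -> call -> scan pattern. The real script uses
# llm_guard's scanners and the OpenAI client; these stubs only show the data flow.

def scan_prompt(scanners, prompt):
    """Run each scanner over the prompt; return the llm_guard-style triple."""
    results_valid, results_score = {}, {}
    for name, scanner in scanners.items():
        prompt, valid, score = scanner(prompt)
        results_valid[name] = valid
        results_score[name] = score
    return prompt, results_valid, results_score

def ban_word_scanner(word):
    """Toy scanner: flags and redacts a banned word."""
    def scan(text):
        found = word in text
        return text.replace(word, "[REDACTED]"), not found, 1.0 if found else 0.0
    return scan

input_scanners = {"BanWord": ban_word_scanner("secret")}
sanitized, valid, scores = scan_prompt(input_scanners, "my secret plan")
```

A real implementation would pass `sanitized` to the OpenAI completion call and run an analogous `scan_output` pass over the response before returning it to the user.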

    Changes

    | File Path | Change Summary |
    | --- | --- |
    | llmguard/openai-guard.py | Added OpenAI API interaction with input and output scanners to securely handle sensitive data. |

    Sequence Diagram(s)

    sequenceDiagram
        participant User
        participant OpenAI_Guard
        participant OpenAI_Client
        participant Vault
        participant Input_Scanner
        participant Output_Scanner
    
        User->>OpenAI_Guard: Provide prompt
        OpenAI_Guard->>Input_Scanner: Scan prompt
        Input_Scanner->>OpenAI_Guard: Return sanitized prompt, validation results, score
        OpenAI_Guard->>OpenAI_Client: Request completion with sanitized prompt
        OpenAI_Client->>OpenAI_Guard: Return response
        OpenAI_Guard->>Output_Scanner: Scan response
        Output_Scanner->>OpenAI_Guard: Return sanitized response, validation results, score
        OpenAI_Guard->>User: Provide sanitized response

    Poem

    In the land of code where secrets lie,
    A guardian was born, to soar the sky.
    With scanners keen and vaults so tight,
    It guards our prompts, both day and night.
    An OpenAI shield, secure and bright!
    🌟🔒🐇


    github-actions[bot] commented 4 months ago

    PR Reviewer Guide 🔍

    โฑ๏ธ Estimated effort to review: 3 ๐Ÿ”ต๐Ÿ”ต๐Ÿ”ตโšชโšช
    ๐Ÿงช No relevant tests
    ๐Ÿ”’ Security concerns

    Sensitive information exposure:
    The script handles sensitive information such as credit card numbers and personal identifiers. It's crucial to ensure that these data are properly sanitized and that the sanitization methods are robust against various types of injection and leakage.
    ⚡ Key issues to review

    **Possible Bug:** The script uses environment variables for API keys, which is generally secure, but it should check for a missing key and warn the user, to prevent runtime errors.

    **Security Risk:** The prompt embeds sensitive information (e.g., credit card numbers, IP addresses). Even with a sanitization step, including such data directly in the script may pose a risk if not handled correctly.

    **Performance Concern:** The script processes each prompt and response synchronously; for high-throughput or low-latency requirements this may not be optimal.
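The synchronous-processing concern could be addressed with concurrency. A minimal sketch, assuming a hypothetical `scan_and_complete` coroutine standing in for the scan -> OpenAI call -> scan pipeline:

```python
import asyncio

async def scan_and_complete(prompt: str) -> str:
    # Stand-in for scanner + network latency; a real version would await
    # an async OpenAI client call between input and output scanning.
    await asyncio.sleep(0.01)
    return f"response to: {prompt}"

async def run_batch(prompts):
    # Run all pipelines concurrently; gather preserves input order.
    return await asyncio.gather(*(scan_and_complete(p) for p in prompts))

results = asyncio.run(run_batch(["p1", "p2", "p3"]))
```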
    github-actions[bot] commented 4 months ago

    PR Code Suggestions ✨

    **Possible issue**

    **Add error handling for missing API key environment variable**

    To improve the robustness of the code, add error handling for the retrieval of the OPENAI_API_KEY from the environment. This ensures that the program can gracefully handle cases where the API key is not set and provides a user-friendly error message.

    [llmguard/openai-guard.py [18]](https://github.com/gyliu513/langX101/pull/182/files#diff-9855d94ed635bef28b2c7106c38ffc0296e11189c8bd9c89cea7fdef530db109R18-R18)

```diff
-client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
+api_key = os.getenv("OPENAI_API_KEY")
+if not api_key:
+    raise ValueError("OPENAI_API_KEY is not set. Please set the environment variable.")
+client = OpenAI(api_key=api_key)
```

    Suggestion importance [1-10]: 9. Why: Adding error handling for the API key retrieval is crucial for robustness. It ensures the program can gracefully handle cases where the API key is not set, providing a user-friendly error message and preventing potential runtime errors.
    **Maintainability**

    **Replace exit(1) with raising an exception for better error handling**

    Replace the direct use of exit(1) with raising an exception. This change makes the code more modular and testable by allowing exceptions to be caught and handled by calling functions, rather than exiting the program directly.

    [llmguard/openai-guard.py [31]](https://github.com/gyliu513/langX101/pull/182/files#diff-9855d94ed635bef28b2c7106c38ffc0296e11189c8bd9c89cea7fdef530db109R31-R31)

```diff
-exit(1)
+raise Exception("Invalid prompt detected.")
```

    Suggestion importance [1-10]: 8. Why: Replacing `exit(1)` with raising an exception improves the modularity and testability of the code. It allows exceptions to be caught and handled by calling functions, making the code more maintainable.

    **Refactor repeated validation logic into a function for better maintainability**

    To enhance code readability and maintainability, extract the repeated logic of checking results_valid values and handling errors (which appears in both the prompt and output validation sections) into a shared function.

    [llmguard/openai-guard.py [29-50]](https://github.com/gyliu513/langX101/pull/182/files#diff-9855d94ed635bef28b2c7106c38ffc0296e11189c8bd9c89cea7fdef530db109R29-R50)

```diff
-if any(results_valid.values()) is False:
-    print(f"Prompt {prompt} is not valid, scores: {results_score}")
-    exit(1)
+def validate_results(results_valid, results_score, text_type, text):
+    if any(results_valid.values()) is False:
+        print(f"{text_type} {text} is not valid, scores: {results_score}")
+        raise Exception(f"Invalid {text_type.lower()} detected.")
+validate_results(results_valid, results_score, "Prompt", prompt)
 ...
-if any(results_valid.values()) is False:
-    print(f"Output {response_text} is not valid, scores: {results_score}")
-    exit(1)
+validate_results(results_valid, results_score, "Output", response_text)
```

    Suggestion importance [1-10]: 8. Why: Refactoring the repeated validation logic into a function enhances code readability and maintainability. It reduces code duplication and centralizes the validation logic, making future updates easier.
    **Security**

    **Improve security by encapsulating the API key retrieval in a function**

    Consider using a more secure method to handle sensitive data such as API keys. Instead of fetching the API key from the environment variable in the global scope, use a function to encapsulate the retrieval logic. This approach enhances security by limiting the scope of the API key and provides a single point for managing access to it.

    [llmguard/openai-guard.py [18]](https://github.com/gyliu513/langX101/pull/182/files#diff-9855d94ed635bef28b2c7106c38ffc0296e11189c8bd9c89cea7fdef530db109R18-R18)

```diff
-client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
+def get_api_key():
+    return os.getenv("OPENAI_API_KEY")
+client = OpenAI(api_key=get_api_key())
```

    Suggestion importance [1-10]: 7. Why: Encapsulating the API key retrieval in a function enhances security by limiting the scope of the API key and provides a single point for managing access to it. However, it is a minor improvement and does not address any critical security issues.
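Putting the suggestions together, here is a minimal sketch of what the resulting helpers might look like. This is an assumption-laden illustration, not the PR's actual code; the OpenAI and llm_guard calls are elided, and the helper names only mirror those proposed in the suggestions:

```python
import os

# Sketch combining the suggestions: fail fast on a missing key, raise instead
# of exit(1), and share one validation helper across prompt and output checks.

def get_api_key() -> str:
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        raise ValueError("OPENAI_API_KEY is not set. Please set the environment variable.")
    return api_key

def validate_results(results_valid, results_score, text_type, text):
    # Note: this fails when *any* scanner flags the text, which is slightly
    # stricter than the original `any(results_valid.values()) is False` check.
    if not all(results_valid.values()):
        raise ValueError(f"{text_type} {text!r} is not valid, scores: {results_score}")
```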