ajitesh123 / auto-review-ai

🚀 AI-Powered Performance Review Generator
https://perfor-ai.streamlit.app/

Master #18

Closed ajitesh123 closed 4 months ago

ajitesh123 commented 4 months ago

PR Type

Enhancement, Dependencies


Description


Changes walkthrough 📝

Relevant files

Enhancement

app.py — Add support for multiple LLMs and update UI (src/app.py, +46/-35)
  • Added support for multiple LLMs (OpenAI, Google, Anthropic, Groq).
  • Refactored the code to use a more modular approach for LLM integration.
  • Updated the Streamlit UI to include options for selecting the LLM type and model size.

llm.py — Implement module for handling multiple LLMs (src/llm.py, +156/-0)
  • Introduced a new module for handling different LLMs (see the sketch after this list).
  • Implemented classes for OpenAI, Google, Anthropic, and Groq LLMs.
  • Added environment setup and model mapping for each LLM.

Dependencies

requirements.txt — Update dependencies for new LLMs (requirements.txt, +3/-0)
  • Added new dependencies for Anthropic, Groq, and Google Generative AI.
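
The contents of src/llm.py are not shown in this thread, so the following is only a minimal sketch of the provider-class pattern described above: a shared base class plus per-provider subclasses with a size-to-model mapping. The class names, model IDs, and method signatures are illustrative assumptions rather than the PR's actual code; only the Anthropic client call reflects a real SDK API.

```python
# Illustrative sketch only: class names, the model map, and method signatures are
# assumptions about how src/llm.py might be organized, not the PR's actual code.
from abc import ABC, abstractmethod

import anthropic


class LLM(ABC):
    """Common interface the provider classes are assumed to share."""

    # Maps the coarse model size chosen in the UI to a provider-specific model id.
    model_map: dict = {}

    def __init__(self, api_key: str, model_size: str = "small"):
        self.api_key = api_key
        self.model = self.model_map.get(model_size, list(self.model_map.values())[0])

    @abstractmethod
    def generate_text(self, prompt: str) -> str:
        ...


class AnthropicLLM(LLM):
    """Example provider wrapper built on the anthropic SDK pinned in requirements.txt."""

    model_map = {
        "small": "claude-3-haiku-20240307",
        "medium": "claude-3-sonnet-20240229",
        "large": "claude-3-opus-20240229",
    }

    def generate_text(self, prompt: str) -> str:
        client = anthropic.Anthropic(api_key=self.api_key)
        message = client.messages.create(
            model=self.model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return message.content[0].text
```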

💡 PR-Agent usage: Comment /help on the PR to get a list of all available PR-Agent tools and their descriptions.

Summary by CodeRabbit

coderabbitai[bot] commented 4 months ago

Walkthrough

The recent updates introduce several new dependencies and overhaul the existing application logic to support multiple language models. Custom language-model classes replace the previous LangChain dependency, improving flexibility and extensibility. New modules add API routes and a FastAPI-based interface, broadening the system's ability to generate performance reviews and self-reviews with various LLMs.

Changes

| File Path | Change Summary |
|---|---|
| requirements.txt | Added dependencies: anthropic==0.25.8, groq==0.5.0, and google-generativeai. |
| src/app.py | Replaced langchain.chat_models with custom LLM classes and refactored the related functions. |
| src/llm.py | New module defining custom language-model classes for the OpenAI, Anthropic, Groq, and Google APIs. |
| app_fastapi.py | New file introducing FastAPI functionality with endpoints for generating reviews (see the sketch below). |
| perf-ui/vercel.json | New Vercel configuration file for React project deployment. |
| review.py, self_review.py | New files introducing review and self-review generation logic, parsing, and LLM interaction. |
| app.py | Introduced performance review generation using various LLMs via a Streamlit UI. |
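
app_fastapi.py itself is not included in this thread, so the snippet below is only a hedged sketch of what a review-generation endpoint might look like. The route path, request fields, and helper wiring are assumptions based on the summary above, not the file's actual contents.

```python
# Hypothetical sketch of app_fastapi.py's surface; the route name, request schema,
# and the review-generation helper call are assumptions based on the PR summary.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class ReviewRequest(BaseModel):
    your_role: str
    candidate_role: str
    perf_question: str
    your_review: str
    llm_type: str            # e.g. "openai", "anthropic", "groq", "google"
    user_api_key: str
    model_size: str = "small"


@app.post("/generate_review")
def generate_review_endpoint(req: ReviewRequest) -> dict:
    if not req.user_api_key:
        raise HTTPException(status_code=400, detail="An API key is required.")
    # In the real file this would call the PR's review-generation helper, e.g.:
    # review = generate_review(req.your_role, req.candidate_role, req.perf_question,
    #                          req.your_review, req.llm_type, req.user_api_key, req.model_size)
    review = f"(review drafted for a {req.candidate_role} via {req.llm_type})"
    return {"review": review}
```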

Sequence Diagram(s)

    sequenceDiagram
        participant User
        participant UI as UI (app.py)
        participant Server as Server (app_fastapi.py)
        participant LLM as Language Models (src/llm.py)
    
        User->>UI: Input review details
        UI->>Server: Send review request
        Server->>Server: Validate request, choose LLM
        Server->>LLM: Request text generation with prompt
        LLM-->>Server: Return generated text
        Server-->>UI: Send generated review
        UI-->>User: Display generated review
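
To make the flow in the diagram concrete, here is a hedged client-side example. The endpoint path and payload fields are carried over from the FastAPI sketch earlier in this thread and are assumptions, not the project's documented API.

```python
# Illustrative client call matching the request/response flow in the diagram;
# the URL and field names are assumptions borrowed from the sketch above.
import requests

payload = {
    "your_role": "Engineering Manager",
    "candidate_role": "Software Engineer",
    "perf_question": "What went well this cycle?",
    "your_review": "Shipped the billing migration with minimal downtime.",
    "llm_type": "anthropic",
    "user_api_key": "sk-...",   # user-supplied provider key
    "model_size": "small",
}

resp = requests.post("http://localhost:8000/generate_review", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["review"])
```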

Poem

In fields of green where rabbits play,
Code changes come our way,
New models speak, and reviews they sing,
With Groq, Anthropic on Spring's wing,
FastAPI routes and Streamlit's glow,
Our application's set to grow! 🌸


codiumai-pr-agent-pro[bot] commented 4 months ago

PR Reviewer Guide 🔍

    โฑ๏ธ Estimated effort to review [1-5] 4
    ๐Ÿงช Relevant tests No
    ๐Ÿ”’ Security concerns No
    โšก Key issues to review Error Handling:
    The new create_llm_instance function in app.py does not handle the case where an invalid LLM type might be passed other than through a direct exception. It would be beneficial to have a more graceful handling or user feedback mechanism.
    Dependency Management:
    The PR introduces new dependencies (anthropic, groq, google-generativeai) which are added to requirements.txt. Ensure that these libraries are compatible with the existing system and do not introduce conflicts.
    Code Duplication:
    The generate_text and stream_text methods in the LLM classes in llm.py have repeated code for setting up model configurations. Consider refactoring to reduce duplication and improve maintainability.
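
As an illustration of the error-handling point, here is a minimal sketch of a more forgiving factory. The name create_llm_instance comes from the review above, but the body, the provider-class names, and the llm import path are assumptions about the PR's code, not its actual implementation.

```python
# Hedged sketch only: the provider classes and their import path are assumed.
import streamlit as st

from llm import OpenAILLM, GoogleLLM, AnthropicLLM, GroqLLM  # assumed module layout


def create_llm_instance(llm_type: str, api_key: str, model_size: str = "small"):
    """Return the matching provider wrapper, or report the problem in the UI."""
    providers = {
        "openai": OpenAILLM,
        "google": GoogleLLM,
        "anthropic": AnthropicLLM,
        "groq": GroqLLM,
    }
    provider_cls = providers.get(llm_type.lower())
    if provider_cls is None:
        # Fail softly in the Streamlit UI instead of raising an unhandled exception.
        st.error(f"Unsupported LLM type {llm_type!r}; choose one of {sorted(providers)}.")
        return None
    return provider_cls(api_key=api_key, model_size=model_size)
```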
codiumai-pr-agent-pro[bot] commented 4 months ago

PR Code Suggestions ✨

Enhancement

Use the selected model size from the sidebar instead of hardcoding it in the function call

The generate_review function currently defaults model_size to "small" when the button is clicked. It would be more intuitive to use the model_size selected in the sidebar instead of hardcoding it. [src/app.py [87]](https://github.com/ajitesh123/Perf-Review-AI/pull/18/files#diff-04791d82dd15fdd480f084d7ef65a10789fa5012cb7935f76080763444d48a00R87-R87)

```diff
-review = generate_review(your_role, candidate_role, perf_question, your_review, llm_type, user_api_key, model_size="small")
+review = generate_review(your_role, candidate_role, perf_question, your_review, llm_type, user_api_key, model_size)
```

Suggestion importance [1-10]: 9
Why: This suggestion correctly identifies a usability improvement by dynamically using the user-selected model size instead of a hardcoded value. The change enhances the flexibility and user experience of the application.

Possible issue

Add a check to ensure the API key is provided before generating the review

Add a check to ensure that user_api_key is not empty before calling generate_review. This will prevent potential errors when the API key is missing. [src/app.py [86-88]](https://github.com/ajitesh123/Perf-Review-AI/pull/18/files#diff-04791d82dd15fdd480f084d7ef65a10789fa5012cb7935f76080763444d48a00R86-R88)

```diff
 if st.button('Write Review'):
-    review = generate_review(your_role, candidate_role, perf_question, your_review, llm_type, user_api_key, model_size)
-    st.markdown(review)
+    if not user_api_key:
+        st.error("API Key is required to generate a review.")
+    else:
+        review = generate_review(your_role, candidate_role, perf_question, your_review, llm_type, user_api_key, model_size)
+        st.markdown(review)
```

Suggestion importance [1-10]: 8
Why: Checking for an empty API key before proceeding with the review is crucial for preventing runtime errors and improving user feedback, making it a valuable enhancement.

Ensure the generate_prompt function handles cases where perf_question might be empty

The generate_prompt function should check that perf_question is not empty before using it in the prompt, to avoid generating incomplete prompts. [src/app.py [30-57]](https://github.com/ajitesh123/Perf-Review-AI/pull/18/files#diff-04791d82dd15fdd480f084d7ef65a10789fa5012cb7935f76080763444d48a00R30-R57)

```diff
 prompt = f"""
 I'm {your_role}. You're an expert at writing performance reviews. On my behalf, help answer the question for performance reviews below.

 {delimiter} Instructions {delimiter}:
 - Use the context below to understand my perspective of working with them
 - Keep the role of the person I'm reviewing, {candidate_role}, in mind when writing the review
 - Use simple language and keep it to the point
 - Strictly answer the questions mentioned in "question for performance"

 {your_review}

-{perf_question}
+{perf_question if perf_question else ""}

 {delimiter} Output in markdown format in the following structure:{delimiter}
 - Q1: Mention the first question in question for performance
 Your answer
 - Q2: Mention the second question in question for performance
 Your answer
 - Q3: Mention the third question in question for performance
 Your answer

 Answer:
 """
```

Suggestion importance [1-10]: 6
Why: This suggestion improves the function by handling cases where `perf_question` is empty, avoiding incomplete prompts. It is a good enhancement for maintaining the quality of generated content.

Possible bug

Ensure the generate_text method in GoogleLLM handles cases where the response might not contain text

The generate_text method of the GoogleLLM class should handle cases where the response might not contain text, to avoid potential errors. [src/llm.py [142-143]](https://github.com/ajitesh123/Perf-Review-AI/pull/18/files#diff-58e450f7066faac2baafb2741e83fa898cf37ef88b7cb49aa26a3851824e7462R142-R143)

```diff
 response = model.generate_content(prompt)
-return response.text
+return response.text if response.text else ""
```

Suggestion importance [1-10]: 7
Why: This suggestion addresses a potential bug where the response might not contain text, which could lead to errors. Adding a check for such cases improves the robustness of the code.