ajitesh123 / auto-review-ai

πŸš€ AI-Powered Performance Review Generator
https://perfor-ai.streamlit.app/

Add audio input and transcription to performance review generation #132

Closed Β· ajitesh123 closed this 2 months ago

ajitesh123 commented 2 months ago

Purpose

This PR adds the ability to generate performance reviews based on both text input and audio input. Users can now record audio and have it transcribed using the Groq API, and the transcription will be included in the context provided to the language model when generating the performance review.
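For reference, a minimal sketch of the transcription helper described above. The `convert_speech_to_text` name and the Groq transcription call come from this PR and its review comments; the `whisper-large-v3` model id and exact call shape are assumptions:

```python
from groq import Groq


def convert_speech_to_text(audio_bytes: bytes, api_key: str) -> str:
    """Transcribe recorded audio to text using Whisper served via the Groq API."""
    client = Groq(api_key=api_key)
    transcription = client.audio.transcriptions.create(
        file=("audio.wav", audio_bytes),  # the SDK accepts a (filename, bytes) tuple
        model="whisper-large-v3",         # assumed model id; use whichever Whisper variant fits
    )
    return transcription.text
```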

Critical Changes

===== Original PR title and description ============

Original Title: ## Summary

Original Description: This pull request adds the ability for users to provide audio input for their performance reviews and self-reviews. Previously, users had to type their input; now they can record audio instead. The audio is converted from speech to text using the Whisper model via the Groq API, and the resulting text is used as input to the review generation process.

Main Changes

  1. Added the `streamlit-audio-record` library to the project, which lets users record audio input within the Streamlit application.
  2. Implemented the `convert_speech_to_text` function, which uses the Whisper model via the Groq API to convert the recorded audio to text.
  3. Updated the `ReviewRequest` and `SelfReviewRequest` models to include an optional `audio_review` field, and updated the `generate_prompt` and `generate_review` functions to use this field when it is provided (see the model sketch after this list).
  4. Updated the `app.py` and `app_fastapi.py` files to handle the audio input, convert it to text, and pass it to the review generation process (see the endpoint sketch after the Impact section below).
  5. Added unit tests to ensure the audio input functionality works as expected.
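As referenced in change 3, a minimal sketch of the updated request models. The field names are inferred from the `generate_review` and `generate_self_review` calls quoted later in this thread, so treat the exact shape as an assumption:

```python
from typing import Optional

from pydantic import BaseModel


class ReviewRequest(BaseModel):
    your_role: str
    candidate_role: str
    perf_question: Optional[str] = None  # falls back to default questions when omitted
    your_review: str
    llm_type: str
    user_api_key: str
    model_size: str
    audio_review: Optional[str] = None  # transcription of the recorded audio, if any


class SelfReviewRequest(BaseModel):
    text_dump: str
    questions: list[str]
    instructions: Optional[str] = None  # hypothetical field; the real model may differ
    llm_type: str
    user_api_key: str
    model_size: str
    audio_review: Optional[str] = None
```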

Impact

With these changes, users can provide audio input for their performance reviews and self-reviews, making the process more convenient and natural. The audio is automatically transcribed to text and used in the review generation process, so the experience stays seamless.
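For the `app_fastapi.py` path in change 4, a hedged sketch of how the endpoint could accept an uploaded recording, transcribe it, and hand the text to the generator. The route, form fields, and wiring are illustrative assumptions rather than the PR's actual code:

```python
from typing import Optional

from fastapi import FastAPI, File, Form, UploadFile

from app import convert_speech_to_text  # transcription helper from this PR
from review import generate_review

app = FastAPI()


@app.post("/generate_review")  # hypothetical route
async def generate_review_endpoint(
    your_role: str = Form(...),
    candidate_role: str = Form(...),
    perf_question: str = Form(""),
    your_review: str = Form(...),
    llm_type: str = Form(...),
    user_api_key: str = Form(...),
    model_size: str = Form("small"),
    audio_file: Optional[UploadFile] = File(None),
):
    # Transcribe the uploaded audio, if any, before generating the review.
    audio_review = None
    if audio_file is not None:
        audio_review = convert_speech_to_text(await audio_file.read(), user_api_key)
    review = generate_review(your_role, candidate_role, perf_question, your_review,
                             llm_type, user_api_key, model_size, audio_review)
    return {"review": review}
```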

coderabbitai[bot] commented 2 months ago

[!IMPORTANT]

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting `reviews.review_status` to `false` in the CodeRabbit configuration file.
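For reference, a minimal `.coderabbit.yaml` applying that setting (the schema comment is optional and enables editor validation, per the tips below):

```yaml
# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
reviews:
  review_status: false  # suppress the "Review skipped" status comment
```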


Tips

### Chat

There are 3 ways to chat with [CodeRabbit](https://coderabbit.ai):

- Review comments: Directly reply to a review comment made by CodeRabbit. Example:
  - `I pushed a fix in commit .`
  - `Generate unit testing code for this file.`
  - `Open a follow-up GitHub issue for this discussion.`
- Files and specific lines of code (under the "Files changed" tab): Tag `@coderabbitai` in a new review comment at the desired location with your query. Examples:
  - `@coderabbitai generate unit testing code for this file.`
  - `@coderabbitai modularize this function.`
- PR comments: Tag `@coderabbitai` in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
  - `@coderabbitai generate interesting stats about this repository and render them as a table.`
  - `@coderabbitai show all the console.log statements in this repository.`
  - `@coderabbitai read src/utils.ts and generate unit testing code.`
  - `@coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.`
  - `@coderabbitai help me debug CodeRabbit configuration file.`

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

### CodeRabbit Commands (Invoked using PR comments)

- `@coderabbitai pause` to pause the reviews on a PR.
- `@coderabbitai resume` to resume the paused reviews.
- `@coderabbitai review` to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
- `@coderabbitai full review` to do a full review from scratch and review all the files again.
- `@coderabbitai summary` to regenerate the summary of the PR.
- `@coderabbitai resolve` to resolve all the CodeRabbit review comments.
- `@coderabbitai configuration` to show the current CodeRabbit configuration for the repository.
- `@coderabbitai help` to get help.

### Other keywords and placeholders

- Add `@coderabbitai ignore` anywhere in the PR description to prevent this PR from being reviewed.
- Add `@coderabbitai summary` to generate the high-level summary at a specific location in the PR description.
- Add `@coderabbitai` anywhere in the PR title to generate the title automatically.

### CodeRabbit Configuration File (`.coderabbit.yaml`)

- You can programmatically configure CodeRabbit by adding a `.coderabbit.yaml` file to the root of your repository.
- Please see the [configuration documentation](https://docs.coderabbit.ai/guides/configure-coderabbit) for more information.
- If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: `# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json`

### Documentation and Community

- Visit our [Documentation](https://coderabbit.ai/docs) for detailed information on how to use CodeRabbit.
- Join our [Discord Community](https://discord.com/invite/GsXnASn26c) to get help, request features, and share feedback.
- Follow us on [X/Twitter](https://twitter.com/coderabbitai) for updates and announcements.
archie-ai-code-explain-pr-review[bot] commented 2 months ago

PR Review Summary

Overall Review:

This PR introduces audio input functionality for performance reviews and self-reviews, enhancing user experience. The implementation includes speech-to-text conversion using the Whisper model via Groq, updates to the review generation process to incorporate audio input, and corresponding UI changes. While the overall implementation appears solid, there are a few areas that require attention, particularly regarding error handling and potential security implications of handling audio data.


⚑ Logical Error Analysis

1. [Blocker] Duplicate function call in app.py: There's a redundant call to `generate_review()` on lines 106 and 107. The second call overwrites the result of the first, potentially ignoring the audio input.

πŸ”’ Security Analysis

2. [Blocker] Exposure of API keys:
The user's API key is directly passed to various functions and stored in the Streamlit session state, which could lead to accidental exposure.

🌟 Code Quality And Design

3. [Consider] Refactor the `generate_review` function to use a strategy pattern for different input types. The current implementation of `generate_review` in app.py (line 79) and review.py (line 92) has multiple parameters and handles both text and audio input.

πŸ§ͺ Test Coverage Analysis

4. [Consider] The new `test_audio_input.py` file provides basic unit tests for the audio functionality, which is a good start. However, it lacks coverage for error cases and edge scenarios.

Recommendations

Recommendation #1

Remove the duplicate call on line 107 in app.py. The corrected code should look like this:

```python
if st.button('Write Review'):
    audio_review = None
    if audio_bytes is not None:
        audio_review = convert_speech_to_text(audio_bytes, user_api_key)
        if audio_review:
            st.write("Transcribed Audio:", audio_review)
    review = generate_review(your_role, candidate_role, perf_question, your_review,
                             llm_type, user_api_key, model_size, audio_review)
    st.markdown(review)
```

This ensures that the audio input is properly considered in the review generation process.
Recommendation #2

Implement a secure key management system or use environment variables for API keys instead of passing them directly through the application. Here's an example of how to use environment variables:

1. Add a `.env` file to your project root (make sure to add it to `.gitignore`):

```
GROQ_API_KEY=your_api_key_here
```

2. Install the python-dotenv package:

```
pip install python-dotenv
```

3. In your app.py, load the environment variables:

```python
import os
from dotenv import load_dotenv

load_dotenv()

# Use the API key
groq_api_key = os.getenv('GROQ_API_KEY')
```

4. Update the `convert_speech_to_text` function to use the environment variable:

```python
def convert_speech_to_text(audio_bytes):
    client = Groq(api_key=os.getenv('GROQ_API_KEY'))
    # ... rest of the function
```

This approach keeps sensitive information out of your codebase and reduces the risk of accidental exposure.
Recommendation #3

Implement a strategy pattern to separate the logic for different input types. This will improve the function's maintainability and make it easier to add new input types in the future. Here's an example implementation:

1. Create a new file `review_strategies.py`:

```python
from abc import ABC, abstractmethod

from app import convert_speech_to_text  # speech-to-text helper added in this PR


class ReviewInputStrategy(ABC):
    @abstractmethod
    def process_input(self, *inputs):
        """Turn one or more raw inputs into the text passed to the prompt."""


class TextReviewStrategy(ReviewInputStrategy):
    def process_input(self, text_input):
        return text_input


class AudioReviewStrategy(ReviewInputStrategy):
    def process_input(self, audio_input):
        return convert_speech_to_text(audio_input)


class CombinedReviewStrategy(ReviewInputStrategy):
    def process_input(self, text_input, audio_input):
        audio_text = convert_speech_to_text(audio_input) if audio_input else ""
        return f"{text_input}\n\nAudio Review Transcript: {audio_text}"
```

2. Update the `generate_review` function in `review.py`:

```python
from review_strategies import CombinedReviewStrategy


def generate_review(your_role, candidate_role, perf_question, your_review,
                    llm_type, user_api_key, model_size, audio_review=None):
    perf_question = perf_question or DEFAULT_QUESTIONS
    input_strategy = CombinedReviewStrategy()
    processed_input = input_strategy.process_input(your_review, audio_review)
    prompt = generate_prompt(your_role, candidate_role, perf_question, processed_input)
    llm = create_llm_instance(llm_type, user_api_key)
    response = get_completion(prompt, llm, model_size)
    return parse_llm_response(response)
```

This approach makes the code more modular and easier to extend with new input types in the future.
Recommendation #4

Enhance the test coverage by adding more test cases, including error scenarios and edge cases. Here are some additional tests to consider adding to `test_audio_input.py`:

```python
import unittest
from unittest.mock import patch, MagicMock

from app import convert_speech_to_text
from review import generate_review
from self_review import generate_self_review


class TestAudioInput(unittest.TestCase):
    # ... existing tests ...

    @patch('app.Groq')
    def test_convert_speech_to_text_error(self, mock_groq):
        mock_client = MagicMock()
        mock_groq.return_value = mock_client
        mock_client.audio.transcriptions.create.side_effect = Exception("API Error")
        audio_bytes = b"dummy audio data"
        with self.assertRaises(Exception):
            convert_speech_to_text(audio_bytes, "fake_api_key")

    def test_generate_review_no_audio(self):
        result = generate_review("Manager", "Employee", "How did they perform?",
                                 "Good work", "openai", "fake_api_key", "small")
        self.assertIsNotNone(result)
        self.assertTrue(len(result) > 0)

    def test_generate_self_review_empty_audio(self):
        result = generate_self_review("Text dump", ["Q1"], None, "openai",
                                      "fake_api_key", "small", "")
        self.assertIsNotNone(result)
        self.assertTrue(len(result) > 0)

    @patch('review.create_llm_instance')
    def test_generate_review_long_audio(self, mock_create_llm):
        mock_llm = MagicMock()
        mock_create_llm.return_value = mock_llm
        mock_llm.generate_text.return_value = "Q1A1"
        long_audio = "A" * 10000  # Simulate a long audio transcript
        result = generate_review("Manager", "Employee", "How did they perform?",
                                 "Good work", "openai", "fake_api_key", "small",
                                 long_audio)
        self.assertEqual(len(result), 1)
        self.assertEqual(result[0]["question"], "Q1")
        self.assertEqual(result[0]["answer"], "A1")
```

These additional tests cover error handling, cases without audio input, empty audio input, and long audio transcripts. This will help ensure the robustness of the audio input functionality across various scenarios.

[Configure settings at: Archie AI - Automated PR Review]

ajitesh123 commented 2 months ago

/help

archie-ai-code-explain-pr-review[bot] commented 2 months ago

Archie AI Commands

Here are the available commands you can use:

You can use either the @archieai prefix or the shorter / prefix for each command. For example, both @archieai ask and /ask will work.

Additionally, in review comments on specific lines of code:

codiumai-pr-agent-pro[bot] commented 2 months ago

PR-Agent was enabled for this repository. To continue using it, please link your git user with your CodiumAI identity here.

PR Agent Walkthrough πŸ€–

Welcome to the PR Agent, an AI-powered tool for automated pull request analysis, feedback, suggestions and more.

Here is a list of tools you can use to interact with the PR Agent:

| Tool | Description | Trigger Interactively :gem: |
|------|-------------|------------------------------|
| [DESCRIBE](https://pr-agent-docs.codium.ai/tools/describe/) | Generates PR description - title, type, summary, code walkthrough and labels | [ ] Run |
| [REVIEW](https://pr-agent-docs.codium.ai/tools/review/) | Adjustable feedback about the PR, possible issues, security concerns, review effort and more | [ ] Run |
| [IMPROVE](https://pr-agent-docs.codium.ai/tools/improve/) | Code suggestions for improving the PR | [ ] Run |
| [UPDATE CHANGELOG](https://pr-agent-docs.codium.ai/tools/update_changelog/) | Automatically updates the changelog | [ ] Run |
| [ADD DOCS](https://pr-agent-docs.codium.ai/tools/documentation/) πŸ’Ž | Generates documentation for methods/functions/classes that changed in the PR | [ ] Run |
| [TEST](https://pr-agent-docs.codium.ai/tools/test/) πŸ’Ž | Generates unit tests for a specific component, based on the PR code change | [ ] Run |
| [IMPROVE COMPONENT](https://pr-agent-docs.codium.ai/tools/improve_component/) πŸ’Ž | Code suggestions for a specific component that changed in the PR | [ ] Run |
| [ANALYZE](https://pr-agent-docs.codium.ai/tools/analyze/) πŸ’Ž | Identifies code components that changed in the PR, and lets you interactively generate tests, docs, and code suggestions for each component | [ ] Run |
| [ASK](https://pr-agent-docs.codium.ai/tools/ask/) | Answering free-text questions about the PR | [*] |
| [GENERATE CUSTOM LABELS](https://pr-agent-docs.codium.ai/tools/custom_labels/) πŸ’Ž | Generates custom labels for the PR, based on specific guidelines defined by the user | [*] |
| [CI FEEDBACK](https://pr-agent-docs.codium.ai/tools/ci_feedback/) πŸ’Ž | Generates feedback and analysis for a failed CI job | [*] |
| [CUSTOM PROMPT](https://pr-agent-docs.codium.ai/tools/custom_prompt/) πŸ’Ž | Generates custom suggestions for improving the PR code, derived only from a specific guidelines prompt defined by the user | [*] |
| [SIMILAR ISSUE](https://pr-agent-docs.codium.ai/tools/similar_issues/) | Automatically retrieves and presents similar issues | [*] |

(1) Note that each tool can be triggered automatically when a new PR is opened, or invoked manually by commenting on a PR.

(2) Tools marked with [*] require additional parameters to be passed. For example, to invoke the /ask tool, you need to comment on a PR: /ask "<question content>". See the relevant documentation for each tool for more details.
