Codium-ai / cover-agent

QodoAI Cover-Agent: An AI-Powered Tool for Automated Test Generation and Code Coverage Enhancement! πŸ’»πŸ€–πŸ§ͺ🐞
https://qodo.ai/
GNU Affero General Public License v3.0

full support of o1 models + Update LLM dependencies in pyproject.toml to latest versions #198

Closed: mrT23 closed this 3 weeks ago

mrT23 commented 3 weeks ago

User description

For o1 models we should always turn off streaming.

Also update the litellm dependencies (and re-add the past fix).


PR Type

enhancement, bug_fix, dependencies


Description


Changes walkthrough

Relevant files

Enhancement

AICaller.py (`cover_agent/AICaller.py`): Adjust streaming and parameters for o1 models
  • Set stream to False for "o1-preview" and "o1-mini" models.
  • Removed max_tokens from completion_params for these models.
  • +1/-0

UnitTestGenerator.py (`cover_agent/UnitTestGenerator.py`): Remove unnecessary model condition in UnitTestGenerator
  • Removed condition setting model to "gpt-4o" for specific models.
  • +0/-2

Tests

test_AICaller.py (`tests/test_AICaller.py`): Update test assertion for exception message
  • Updated test assertion for exception message.
  • +2/-1

Dependencies

pyproject.toml (`pyproject.toml`): Update LLM dependencies to latest versions
  • Updated litellm dependency to a specific Git repository.
  • Updated openai and tiktoken dependencies to newer versions.
  • +3/-3
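
Putting the walkthrough together, the o1-specific parameter handling in `AICaller` presumably amounts to something like the following sketch. The `build_completion_params` helper and its defaults are hypothetical stand-ins, not the actual implementation:

```python
def build_completion_params(model: str, prompt: str, max_tokens: int = 4096) -> dict:
    """Hypothetical sketch of the o1 handling described in the walkthrough."""
    completion_params = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": True,  # default path: stream responses
    }
    if model in ["o1-preview", "o1-mini"]:
        # o1 models don't support streaming or a custom temperature
        completion_params["stream"] = False
        completion_params["temperature"] = 1
        # o1 expects 'max_completion_tokens' instead of 'max_tokens'
        completion_params["max_completion_tokens"] = max_tokens
        completion_params.pop("max_tokens", None)
    return completion_params


params = build_completion_params("o1-mini", "hello")
print(params["stream"], "max_tokens" in params)  # False False
```

These parameters would then be passed on to `litellm.completion(**completion_params)`; the sketch only illustrates how the PR reshapes them for o1 models.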

    πŸ’‘ PR-Agent usage: Comment /help "your question" on any pull request to receive relevant information

codiumai-pr-agent-pro[bot] commented 3 weeks ago

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 PR contains tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Potential Inconsistency
The `stream` parameter is set to False twice for o1 models. Verify whether this redundancy is necessary or whether it can be simplified.

Possible Oversight
The removal of the model-specific condition for setting the model to "gpt-4o" might affect functionality. Ensure this change doesn't impact the expected behavior.

Test Modification
The assertion for the error message has been changed, and a comment suggests it might vary. Verify that this change accurately reflects the expected behavior across different litellm versions.
codiumai-pr-agent-pro[bot] commented 3 weeks ago

PR Code Suggestions ✨

Explore these optional code suggestions:

Category: Best practice
Suggestion: Use a more flexible assertion for error messages in tests

Consider using a more generic assertion for the error message to make the test more robust against minor changes in error reporting. [tests/test_AICaller.py [54]](https://github.com/Codium-ai/cover-agent/pull/198/files#diff-e84aeed71db4a60462435bc58814bc055ba7779343c529c6531763006007a961R54-R54)

```diff
-assert str(exc_info.value) == "'NoneType' object is not subscriptable"  # this error message might change for different versions of litellm
+assert "NoneType" in str(exc_info.value) and "subscriptable" in str(exc_info.value)
```

- [ ] **Apply this suggestion**

Suggestion importance[1-10]: 7
Why: The suggestion to use a more generic assertion makes the test more robust against changes in error messages, enhancing the maintainability and reliability of the test suite.
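
The substring-based assertion above can be sanity-checked in plain Python, without pytest. Here `trigger_error` is a hypothetical stand-in for the failing litellm call the real test exercises:

```python
def trigger_error():
    # Stand-in for the failure mode the test guards against:
    # the library returning None and the caller indexing into it
    response = None
    return response["choices"]

try:
    trigger_error()
except TypeError as exc:
    # Exact wording may differ across Python/litellm versions,
    # so match stable substrings instead of the full message
    assert "NoneType" in str(exc) and "subscriptable" in str(exc)
    print("assertion holds:", exc)
```

The same substring check drops straight into the real test's `pytest.raises(...)` block via `exc_info.value`.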
Category: Enhancement
Suggestion: Consolidate redundant comments to improve code readability

Consider consolidating the two statements about streaming for o1 models into a single comment to avoid redundancy. [cover_agent/AICaller.py [61-66]](https://github.com/Codium-ai/cover-agent/pull/198/files#diff-d94557e610fe5a4d747c06e8ac8d6a0e243e5c50b4df8f624fa1acc7d51f57b0R61-R66)

```diff
 if self.model in ["o1-preview", "o1-mini"]:
-    stream = False  # o1 doesn't support streaming
+    # o1 models don't support streaming
+    stream = False
     completion_params["temperature"] = 1
-    completion_params["stream"] = False  # o1 doesn't support streaming
+    completion_params["stream"] = False
     completion_params["max_completion_tokens"] = max_tokens
     completion_params.pop("max_tokens", None)  # Remove 'max_tokens' if present
```

- [ ] **Apply this suggestion**

Suggestion importance[1-10]: 5
Why: The suggestion to consolidate comments improves code readability by reducing redundancy. However, it does not address any functional issues or bugs, so its impact is moderate.

    πŸ’‘ Need additional feedback ? start a PR chat

EmbeddedDevops1 commented 3 weeks ago

⌛ Running regression testing: https://github.com/Codium-ai/cover-agent/actions/runs/11608162470