ryansurf / cli-surf

Get surf and ocean data from the command line interface
MIT License

Revise GPT prompt #147

Closed: macnult closed this pull request 1 month ago

macnult commented 1 month ago

Code:

[✓] Does your submission pass tests?
Revised the GPT prompt in response to this reported issue: https://github.com/ryansurf/cli-surf/issues/146. I also extended the assertion so it shows what was actually returned if the test fails.

Summary by Sourcery

Revise the GPT prompt in the test to ensure precise response matching and enhance the assertion to provide detailed feedback on test failures.

sourcery-ai[bot] commented 1 month ago

Reviewer's Guide by Sourcery

This pull request revises the GPT prompt in the test_gpt.py file to address a reported issue. The changes include updating the test prompt for more precise output and enhancing the assertion message for better error reporting.

No diagrams generated as the changes look simple and do not need a visual representation.

File-Level Changes

Change / Details / Files

Revised GPT prompt for more precise output (tests/test_gpt.py)
  • Updated the surf_summary variable with a more specific instruction
  • Changed from "Please only output: 'gpt works!'" to "Please respond with the exact phrase 'gpt works'. Do not include any additional text or context."

Enhanced assertion message for better error reporting (tests/test_gpt.py)
  • Modified the assert statement to include the actual response in case of failure
  • Added an f-string to display the unexpected response: f"Expected 'gpt works', but got: {gpt_response}"
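Put together, the revised test might look roughly like the sketch below. This is an illustration based on the variable names mentioned in the review (surf_summary, gpt_response, expected_response); the real test in tests/test_gpt.py calls the project's live GPT helper, which is stubbed out here.

```python
def fake_gpt(prompt: str) -> str:
    """Stand-in for the project's GPT helper, which hits a live model."""
    return "gpt works"


def test_gpt():
    # Revised, more specific prompt from this PR:
    surf_summary = (
        "Please respond with the exact phrase 'gpt works'. "
        "Do not include any additional text or context."
    )
    gpt_response = fake_gpt(surf_summary)
    expected_response = "gpt works"
    # Enhanced assertion: include the actual response on failure.
    assert gpt_response == expected_response, (
        f"Expected '{expected_response}', but got: {gpt_response}"
    )


test_gpt()
print("test passed")
```

With a live model, the assertion message is what surfaces the off-topic replies quoted later in this thread.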

macnult commented 1 month ago

The test still passes/fails inconsistently. When it does fail, it returns the following:


assert gpt_response == expected_response, f"Expected '{expected_response}', but got: {gpt_response}"
E   AssertionError: Expected 'gpt works', but got: I'm here to help with any inquiries you may have. How can I assist you today?
E   assert "I'm here to ...st you today?" == 'gpt works'
E
E   - gpt works
E   + I'm here to help with any inquiries you may have. How can I assist you today?

tests\test_gpt.py:22: AssertionError


The same thing happens with test_helper.py, though less frequently. Locally I changed gpt_prompt in test_helper.py, which helps in some runs, but not consistently enough that I can call it resolved.

ryansurf commented 1 month ago

The test still passes/fails inconsistently. When it does fail, it returns the following:

assert gpt_response == expected_response, f"Expected '{expected_response}', but got: {gpt_response}"
E   AssertionError: Expected 'gpt works', but got: I'm here to help with any inquiries you may have. How can I assist you today?
E   assert "I'm here to ...st you today?" == 'gpt works'
E
E   - gpt works
E   + I'm here to help with any inquiries you may have. How can I assist you today?

tests\test_gpt.py:22: AssertionError

The same thing happens with test_helper.py, though less frequently. Locally I changed gpt_prompt in test_helper.py, which helps in some runs, but not consistently enough that I can call it resolved.


The GPT is definitely finicky, but your prompt seems to be the better choice. We might have to rethink this going forward, but I'll merge your PR as it's an improvement!
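One possible direction for the rethink (a suggestion, not part of this PR): instead of exact equality against a live model's output, normalize the response and check for the expected phrase as a substring, so minor model chatter no longer fails the test while off-topic replies still do.

```python
import re


def contains_expected(response: str, expected: str = "gpt works") -> bool:
    """Check for the expected phrase after normalizing case and punctuation.

    More tolerant than exact equality: "Sure! Gpt works." still passes,
    while a completely off-topic reply still fails.
    """
    normalized = re.sub(r"[^a-z0-9 ]", "", response.lower())
    return expected in normalized


# Exact match still passes:
assert contains_expected("gpt works")
# Minor chatter around the phrase now passes too:
assert contains_expected("Sure! Gpt works.")
# The off-topic reply seen in this thread still fails:
assert not contains_expected("I'm here to help with any inquiries you may have.")
```

The trade-off is that a substring check can no longer catch a model that wraps the phrase in unwanted extra text, which is exactly what the stricter prompt in this PR tries to prevent.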

codecov[bot] commented 1 month ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

ryansurf commented 1 month ago

@macnult whoops, actually the linter failed. Can you run make lint to fix the error?

macnult commented 1 month ago

@ryansurf Sure thing! Should be good to go now.

ryansurf commented 1 month ago

@all-contributors please add @macnult for code

allcontributors[bot] commented 1 month ago

@ryansurf

I've put up a pull request to add @macnult! :tada: