Data-Simply / pyretailscience

pyretailscience - A data analysis and science toolkit for retail data

feat: add standard histogram plot #85

Closed Mrglglglglgl closed 1 month ago

Mrglglglglgl commented 2 months ago

PR Type

Enhancement, Tests, Documentation


Description


Changes walkthrough πŸ“

Relevant files

**Enhancement**

pyretailscience/plots/histogram.py: Add histogram plotting functionality with customization options
• Added a new module for creating histograms from pandas DataFrames or Series.
• Supports single or multiple histograms, with optional grouping by a categorical column.
• Includes range clipping and filling, and comprehensive customization options.
• Assumes pre-aggregated data for plotting. (A usage sketch follows this walkthrough.)
• +266/-0

pyretailscience/style/graph_utils.py: Enhance graph styling for better visualization
• Moved grid lines behind the plot for better visualization.
• Improved legend handling for clarity.
• +7/-5

**Tests**

tests/plots/test_histogram.py: Add tests for histogram plot functionality
• Added tests for the new histogram plot function.
• Included tests for range clipping and filling.
• Used fixtures for mocking and sample data.
• +267/-0

**Documentation**

docs/analysis_modules.md: Document histogram plot usage and features
• Documented the new histogram plot functionality.
• Provided example usage and an explanation of features.
• +46/-0

docs/api/plots/histogram.md: Add API documentation for histogram plot
• Added API documentation for the histogram plot module.
• +3/-0

mkdocs.yml: Update documentation navigation for histogram plot
• Updated navigation to include the new histogram plot documentation.
• +2/-0
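
A minimal usage sketch of the new module, assuming only the `plot()` signature quoted in the code suggestions further down and the `quantity`/`category` column names used in the test fixtures; the sample data is illustrative and not taken from the PR's documentation.

```python
import pandas as pd

from pyretailscience.plots import histogram

# Illustrative pre-aggregated data; the module assumes aggregation has already happened
df = pd.DataFrame(
    {
        "quantity": [1, 3, 2, 5, 4, 2, 7, 6],
        "category": ["A", "A", "A", "A", "B", "B", "B", "B"],
    }
)

# One overlaid histogram per category, clipping values above 5.0 into the top bin
ax = histogram.plot(
    df=df,
    value_col="quantity",
    group_col="category",
    title="Quantity distribution by category",
    x_label="Quantity",
    y_label="Count",
    range_upper=5.0,
    range_method="clip",
)
```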

πŸ’‘ PR-Agent usage: Comment /help "your question" on any pull request to receive relevant information

Summary by CodeRabbit

    coderabbitai[bot] commented 2 months ago

    [!IMPORTANT]

    Review skipped

    Review was skipped due to path filters

:no_entry: Files ignored due to path filters (2)
* `docs/assets/images/analysis_modules/plots/histogram_plot.svg` is excluded by `!**/*.svg`
* `docs/assets/images/analysis_modules/plots/line_plot.svg` is excluded by `!**/*.svg`

CodeRabbit blocks several paths by default. You can override this behavior by explicitly including those paths in the path filters. For example, including `**/dist/**` will override the default block on the `dist` directory by removing the pattern from both lists.

You can disable this status message by setting `reviews.review_status` to `false` in the CodeRabbit configuration file.

    Walkthrough

    The pull request introduces a "Histogram Plot" feature in the pyretailscience library, enhancing both documentation and functionality for histogram visualization. It adds a dedicated module for histogram plotting, updates the documentation to include usage examples, and introduces unit tests to ensure robustness. Changes span multiple files, including documentation updates, new functions for histogram creation and customization, and improvements to existing plotting utilities.

    Changes

| File | Change Summary |
|---|---|
| docs/analysis_modules.md | New section on "Histogram Plot" added, including an overview, analysis types, and example usage. |
| pyretailscience/plots/histogram.py | New module for histogram plotting with functions for creating and customizing histograms. |
| tests/plots/test_histogram.py | New unit tests for histogram plotting functionality, covering various scenarios and edge cases. |

    Possibly related PRs

    Suggested reviewers

    Poem

    🐰 In the garden of data, we hop with delight,
    New histograms bloom, a colorful sight.
    With axes so clear and legends that sing,
    Our plots tell a tale of each little thing.
    So gather your data, let’s plot and explore,
    With rabbits and histograms, who could ask for more? πŸ₯•


    codiumai-pr-agent-pro[bot] commented 2 months ago

    PR-Agent was enabled for this repository. To continue using it, please link your git user with your CodiumAI identity here.

    PR Reviewer Guide πŸ”

    ⏱️ Estimated effort to review: 4 πŸ”΅πŸ”΅πŸ”΅πŸ”΅βšͺ
    πŸ§ͺ PR contains tests
    πŸ”’ No security concerns identified
    ⚑ Key issues to review

**Performance Concern**
The `apply_range_clipping` function may be inefficient for large datasets, as it creates a new DataFrame for each column. Consider using in-place operations or vectorized methods for better performance (a vectorized sketch follows below).

**Error Handling**
The function doesn't handle the case where `value_col` is an empty list. This could lead to unexpected behavior or errors.

**Test Coverage**
The tests don't cover all edge cases, such as empty DataFrames or Series, or invalid input types for `value_col` and `group_col`.
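
To make the performance and error-handling points concrete, here is a hedged sketch of a vectorized variant; the parameter names follow those used in `histogram.py`, but this is not the module's actual implementation.

```python
import numpy as np
import pandas as pd


def apply_range_vectorized(
    df: pd.DataFrame,
    value_col: list[str],
    range_lower: float | None = None,
    range_upper: float | None = None,
    range_method: str = "clip",
) -> pd.DataFrame:
    """Clip or blank out values outside [range_lower, range_upper] without per-element apply()."""
    if not value_col:
        # Guard the empty-list case flagged under "Error Handling" above
        raise ValueError("value_col must contain at least one column name")
    if range_lower is None and range_upper is None:
        return df
    out = df.copy()
    if range_method == "clip":
        # DataFrame.clip is vectorized across all selected columns at once
        out[value_col] = out[value_col].clip(lower=range_lower, upper=range_upper)
        return out
    # "fillna"-style behaviour: replace out-of-range values with NaN via a boolean mask
    lower = -np.inf if range_lower is None else range_lower
    upper = np.inf if range_upper is None else range_upper
    values = out[value_col]
    out[value_col] = values.where((values >= lower) & (values <= upper))
    return out
```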
    codiumai-pr-agent-pro[bot] commented 2 months ago


    PR Code Suggestions ✨

    Explore these optional code suggestions:

Category: Enhancement

**Add option to normalize histogram for better distribution comparison**

Consider adding an option to normalize the histogram. This can be useful when comparing distributions with different sample sizes. You can implement this by adding a `normalize` parameter and using `density=True` in the histogram plot when normalization is requested.

[pyretailscience/plots/histogram.py [54-69]](https://github.com/Data-Simply/pyretailscience/pull/85/files#diff-4a692039d06850d25663d85a0b4364598e6f6bab7df1141397f0469052c3635dR54-R69)

```diff
 def plot(
     df: pd.DataFrame | pd.Series,
     value_col: str | list[str] | None = None,
     group_col: str | None = None,
     title: str | None = None,
     x_label: str | None = None,
     y_label: str | None = None,
     legend_title: str | None = None,
     ax: Axes | None = None,
     source_text: str | None = None,
     move_legend_outside: bool = False,
     range_lower: float | None = None,
     range_upper: float | None = None,
     range_method: Literal["clip", "fillna"] = "clip",
+    normalize: bool = False,
     **kwargs: dict[str, any],
 ) -> SubplotBase:
+    # ... (existing code)
+    kwargs['density'] = normalize
+    ax = _plot_histogram(
+        df=df, value_col=value_col, group_col=group_col, ax=ax, cmap=cmap, num_histograms=num_histograms, **kwargs
+    )
+    # ... (rest of the existing code)
```

- [ ] **Apply this suggestion**

Suggestion importance [1-10]: 9. Why: Normalizing histograms is a valuable feature when comparing distributions with different sample sizes. This suggestion significantly enhances the functionality by providing an option to normalize the data, making comparisons more meaningful.
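
If the suggested `normalize` flag were adopted (it is only a proposal, not part of the merged API), a call might look like the following; `density=True` is what pandas' hist plotting would receive under the hood.

```python
# Hypothetical call assuming the proposed `normalize` parameter exists
ax = histogram.plot(df=df, value_col="quantity", group_col="category", normalize=True)
```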
Category: Enhancement

**Simplify range clipping logic by using infinity values instead of None**

Consider using `np.inf` instead of `None` for `range_upper` and `-np.inf` for `range_lower` in the `apply_range_clipping` function. This would simplify the logic and make it more consistent, as you wouldn't need to check for `None` values separately.

[pyretailscience/plots/histogram.py [176-204]](https://github.com/Data-Simply/pyretailscience/pull/85/files#diff-4a692039d06850d25663d85a0b4364598e6f6bab7df1141397f0469052c3635dR176-R204)

```diff
-if range_lower is not None or range_upper is not None:
-    if range_method == "clip":
-        return df.assign(**{col: df[col].clip(lower=range_lower, upper=range_upper) for col in value_col})
-    return df.assign(
-        **{
-            col: df[col].apply(
-                lambda x: np.nan
-                if (range_lower is not None and x < range_lower) or (range_upper is not None and x > range_upper)
-                else x
-            )
-            for col in value_col
-        }
-    )
+range_lower = -np.inf if range_lower is None else range_lower
+range_upper = np.inf if range_upper is None else range_upper
+if range_method == "clip":
+    return df.assign(**{col: df[col].clip(lower=range_lower, upper=range_upper) for col in value_col})
+return df.assign(
+    **{
+        col: df[col].apply(
+            lambda x: np.nan if x < range_lower or x > range_upper else x
+        )
+        for col in value_col
+    }
+)
```

- [ ] **Apply this suggestion**

Suggestion importance [1-10]: 8. Why: This suggestion simplifies the logic by using `np.inf` and `-np.inf` instead of checking for `None`, making the code cleaner and potentially reducing errors related to boundary conditions.
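
A small self-contained check of the behaviour this suggestion relies on: pandas treats a `None` bound in `clip()` as "no bound", and infinite bounds turn the range test into a plain comparison with no separate `None` checks.

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 5.0, 9.0])
print(s.clip(lower=None, upper=6.0).tolist())  # [1.0, 5.0, 6.0]; None means "no lower bound"
print(((s >= -np.inf) & (s <= 6.0)).tolist())  # [True, True, False]; no None handling needed
```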
Category: Enhancement

**Implement data binning for more flexible and meaningful histogram creation**

Consider using the `pandas.cut()` function to bin the data before plotting. This can be useful for creating more meaningful histograms, especially when dealing with continuous data or when you want to control the number of bins.

[pyretailscience/plots/histogram.py [228-266]](https://github.com/Data-Simply/pyretailscience/pull/85/files#diff-4a692039d06850d25663d85a0b4364598e6f6bab7df1141397f0469052c3635dR228-R266)

```diff
 def _plot_histogram(
     df: pd.DataFrame,
     value_col: list[str],
     group_col: str | None,
     ax: Axes | None,
     cmap: ListedColormap,
     num_histograms: int,
+    bins: int | list = 10,
     **kwargs: dict,
 ) -> Axes:
     add_legend = num_histograms > 1
     if group_col is None:
-        return df[value_col].plot(kind="hist", ax=ax, alpha=0.5, legend=add_legend, color=cmap.colors[0], **kwargs)
+        for col in value_col:
+            df[f'{col}_binned'] = pd.cut(df[col], bins=bins)
+        return df[[f'{col}_binned' for col in value_col]].plot(kind="hist", ax=ax, alpha=0.5, legend=add_legend, color=cmap.colors[0], **kwargs)
     df_pivot = df.pivot(columns=group_col, values=value_col[0])
+    for col in df_pivot.columns:
+        df_pivot[f'{col}_binned'] = pd.cut(df_pivot[col], bins=bins)
     # Plot all columns at once
-    return df_pivot.plot(
+    return df_pivot[[f'{col}_binned' for col in df_pivot.columns]].plot(
         kind="hist",
         ax=ax,
         alpha=0.5,
         legend=add_legend,
         color=cmap.colors[: len(df_pivot.columns)],  # Use the appropriate number of colors
         **kwargs,
     )
```

- [ ] **Apply this suggestion**

Suggestion importance [1-10]: 7. Why: Using `pandas.cut()` for binning can enhance the flexibility and interpretability of histograms, especially for continuous data. This suggestion adds value by allowing more control over the histogram bins.
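
For reference, a tiny standalone illustration of what `pandas.cut()` produces, independent of this PR's code; the sample values are made up.

```python
import pandas as pd

prices = pd.Series([1.2, 2.5, 3.1, 4.8, 5.0, 7.9])

# Cut into 3 equal-width bins; each value is labelled with its interval
binned = pd.cut(prices, bins=3)
print(binned.value_counts().sort_index())
# Expected counts per bin are 3, 2 and 1 (interval labels are approximate)
```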

πŸ’‘ Need additional feedback? Start a PR chat.

    codiumai-pr-agent-pro[bot] commented 2 months ago


    CI Failure Feedback 🧐

    **Action:** Pre-Commit
    **Failed stage:** [Run Pre-commit](https://github.com/Data-Simply/pyretailscience/actions/runs/11066943589/job/30749236313) [❌]
    **Failed test name:** ruff
**Failure summary:** The action failed because the ruff hook detected and fixed 21 errors related to missing trailing commas (COM812) in the following files:
• `pyretailscience/plots/histogram.py`: 5 errors
• `tests/plots/test_histogram.py`: 16 errors
The ruff-format hook also failed because it reformatted 2 files. The process completed with exit code 1 due to these modifications. (An illustrative before/after of the COM812 fix appears after the logs below.)
• Relevant error logs:

```yaml
1: ##[group]Operating System
2: Ubuntu
...
465: [INFO] Once installed this environment will be reused.
466: [INFO] This may take a few minutes...
467: [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
468: [INFO] Once installed this environment will be reused.
469: [INFO] This may take a few minutes...
470: [INFO] Installing environment for https://github.com/kynan/nbstripout.
471: [INFO] Once installed this environment will be reused.
472: [INFO] This may take a few minutes...
473: ruff.....................................................................Failed
474: - hook id: ruff
475: - files were modified by this hook
476: Fixed 21 errors:
477: - pyretailscience/plots/histogram.py:
478: 5 Γ— COM812 (missing-trailing-comma)
479: - tests/plots/test_histogram.py:
480: 16 Γ— COM812 (missing-trailing-comma)
481: Found 21 errors (21 fixed, 0 remaining).
482: ruff-format..............................................................Failed
...
486: 2 files reformatted, 25 files left unchanged
487: trim trailing whitespace.................................................Passed
488: fix end of files.........................................................Passed
489: fix python encoding pragma...............................................Passed
490: check yaml...............................................................Passed
491: debug statements (python)................................................Passed
492: pytest...................................................................Passed
493: nbstripout...............................................................Passed
494: ##[error]Process completed with exit code 1.
```
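
For context, an illustrative before/after of the kind of change ruff applies for COM812 (missing-trailing-comma); this snippet is not taken from the PR's files.

```python
# Before: COM812 flags the last entry because the closing brace is on its own
# line and the entry lacks a trailing comma
params = {
    "kind": "hist",
    "legend": False
}

# After `ruff --fix` (with COM812 enabled): the trailing comma is added
params = {
    "kind": "hist",
    "legend": False,
}
```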

✨ CI feedback usage guide:

The CI feedback tool (`/checks`) automatically triggers when a PR has a failed check. The tool analyzes the failed checks and provides several feedbacks:
- Failed stage
- Failed test name
- Failure summary
- Relevant error logs

In addition to being automatically triggered, the tool can also be invoked manually by commenting on a PR:
```
/checks "https://github.com/{repo_name}/actions/runs/{run_number}/job/{job_number}"
```
where `{repo_name}` is the name of the repository, `{run_number}` is the run number of the failed check, and `{job_number}` is the job number of the failed check.

#### Configuration options
- `enable_auto_checks_feedback` - if set to true, the tool will automatically provide feedback when a check is failed. Default is true.
- `excluded_checks_list` - a list of checks to exclude from the feedback, for example: ["check1", "check2"]. Default is an empty list.
- `enable_help_text` - if set to true, the tool will provide a help message with the feedback. Default is true.
- `persistent_comment` - if set to true, the tool will overwrite a previous checks comment with the new feedback. Default is true.
- `final_update_message` - if `persistent_comment` is true and updating a previous checks message, the tool will also create a new message: "Persistent checks updated to latest commit". Default is true.

See more information about the `checks` tool in the [docs](https://pr-agent-docs.codium.ai/tools/ci_feedback/).
    codecov[bot] commented 2 months ago

    Codecov Report

    Attention: Patch coverage is 75.23810% with 26 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| pyretailscience/style/graph_utils.py | 36.66% | 18 Missing and 1 partial :warning: |
| pyretailscience/plots/histogram.py | 87.93% | 3 Missing and 4 partials :warning: |

| Files with missing lines | Coverage Ξ” |
|---|---|
| pyretailscience/plots/line.py | 95.65% <100.00%> (ΓΈ) |
| pyretailscience/style/tailwind.py | 95.58% <100.00%> (+0.58%) :arrow_up: |
| pyretailscience/plots/histogram.py | 87.93% <87.93%> (ΓΈ) |
| pyretailscience/style/graph_utils.py | 67.70% <36.66%> (-14.39%) :arrow_down: |
    codiumai-pr-agent-pro[bot] commented 2 months ago


    CI Failure Feedback 🧐

    **Action:** Pre-Commit
    **Failed stage:** [Run Pre-commit](https://github.com/Data-Simply/pyretailscience/actions/runs/11103261931/job/30844831400) [❌]
    **Failed test name:** ruff-format
    **Failure summary:** The action failed because the ruff-format check did not pass. Specifically:
  • 1 file was reformatted, indicating that the code did not meet the formatting standards required by
    ruff-format.
  • The failure of this check resulted in the process completing with exit code 1.
• Relevant error logs:

```yaml
1: ##[group]Operating System
2: Ubuntu
...
467: [INFO] This may take a few minutes...
468: [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
469: [INFO] Once installed this environment will be reused.
470: [INFO] This may take a few minutes...
471: [INFO] Installing environment for https://github.com/kynan/nbstripout.
472: [INFO] Once installed this environment will be reused.
473: [INFO] This may take a few minutes...
474: ruff.....................................................................Passed
475: ruff-format..............................................................Failed
...
479: 1 file reformatted, 26 files left unchanged
480: trim trailing whitespace.................................................Passed
481: fix end of files.........................................................Passed
482: fix python encoding pragma...............................................Passed
483: check yaml...............................................................Passed
484: debug statements (python)................................................Passed
485: pytest...................................................................Passed
486: nbstripout...............................................................Passed
487: ##[error]Process completed with exit code 1.
```

    codiumai-pr-agent-pro[bot] commented 2 months ago


    CI Failure Feedback 🧐

    **Action:** Pre-Commit
    **Failed stage:** [Run Pre-commit](https://github.com/Data-Simply/pyretailscience/actions/runs/11128201459/job/30922371713) [❌]
    **Failed test name:** test_plot_single_histogram_no_legend
    **Failure summary:** The action failed due to multiple test failures:
  • The trim trailing whitespace pre-commit hook failed because it modified files to remove trailing
    whitespace.
• Several tests in `tests/plots/test_histogram.py` failed with `AssertionError`. The tests expected the plot function to be called with specific parameters, but the actual call included an unexpected `alpha` parameter and a concrete list of color values instead of a generic color placeholder.
• Tests in `tests/test_standard_graphs.py` failed with a `TypeError` because the `get_base_cmap` function was called with an unexpected keyword argument `num_colors`.
(A sketch of how the histogram test expectations could be realigned with the actual calls appears after the logs below.)
  • Relevant error logs: ```yaml 1: ##[group]Operating System 2: Ubuntu ... 468: [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. 469: [INFO] Once installed this environment will be reused. 470: [INFO] This may take a few minutes... 471: [INFO] Installing environment for https://github.com/kynan/nbstripout. 472: [INFO] Once installed this environment will be reused. 473: [INFO] This may take a few minutes... 474: ruff.....................................................................Passed 475: ruff-format..............................................................Passed 476: trim trailing whitespace.................................................Failed ... 478: - exit code: 1 479: - files were modified by this hook 480: Fixing docs/assets/images/analysis_modules/plots/line_plot.svg 481: Fixing docs/assets/images/analysis_modules/plots/histogram_plot.svg 482: fix end of files.........................................................Passed 483: fix python encoding pragma...............................................Passed 484: check yaml...............................................................Passed 485: debug statements (python)................................................Passed 486: pytest...................................................................Failed ... 496: tests/plots/test_line.py ......... [ 16%] 497: tests/test_cross_shop.py ........ [ 21%] 498: tests/test_gain_loss.py ...................... [ 37%] 499: tests/test_options.py ...................... [ 54%] 500: tests/test_product_association.py ............... [ 64%] 501: tests/test_range_planning.py ........ [ 70%] 502: tests/test_segmentation.py ...................... [ 86%] 503: tests/test_standard_graphs.py .....FFFFFF....... [100%] 504: =================================== FAILURES =================================== 505: _____________________ test_plot_single_histogram_no_legend _____________________ 506: self = , args = () 507: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': False} 508: expected = call(kind='hist', ax=, legend=False, color=) 509: actual = call(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 510: _error_message = ._error_message at 0x7feb06a2a7a0> 511: cause = None 512: def assert_called_with(self, /, *args, **kwargs): 513: """assert that the last call was made with the specified arguments. 514: Raises an AssertionError if the args and keyword args passed in are 515: different to the last call to the mock.""" 516: if self.call_args is None: 517: expected = self._format_mock_call_signature(args, kwargs) 518: actual = 'not called.' 519: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 520: % (expected, actual)) 521: raise AssertionError(error_message) 522: def _error_message(): 523: msg = self._format_mock_failure_message(args, kwargs) 524: return msg 525: expected = self._call_matcher(_Call((args, kwargs), two=True)) 526: actual = self._call_matcher(self.call_args) 527: if actual != expected: 528: cause = expected if isinstance(expected, Exception) else None 529: > raise AssertionError(_error_message()) from cause 530: E AssertionError: expected call not found. 531: E Expected: plot(kind='hist', ax=, legend=False, color=) 532: E Actual: plot(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 533: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 
537: def assert_called_once_with(self, /, *args, **kwargs): 538: """assert that the mock was called exactly once and that that call was 539: with the specified arguments.""" 540: if not self.call_count == 1: 541: msg = ("Expected '%s' to be called once. Called %s times.%s" 542: % (self._mock_name or 'mock', 543: self.call_count, 544: self._calls_repr())) 545: raise AssertionError(msg) 546: > return self.assert_called_with(*args, **kwargs) 547: E AssertionError: expected call not found. ... 565: E ? ^^^^^^ 566: E + 'color': [ 567: E ? ^ 568: E + '#22c55e', 569: E + ], 570: E 'kind': 'hist', 571: E 'legend': False, 572: E } 573: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 583: def test_plot_single_histogram_no_legend(sample_dataframe, mocker): 584: """Test the plot function with a single histogram and no legend.""" 585: # Create the plot axis using plt.subplots() 586: _, ax = plt.subplots() 587: # Call the plot function 588: resulted_ax = plot(df=sample_dataframe, value_col="quantity", ax=ax) 589: # Verify that df.plot was called with correct parameters 590: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=False, color=mocker.ANY) 591: E AssertionError: expected call not found. ... 609: E ? ^^^^^^ 610: E + 'color': [ 611: E ? ^ 612: E + '#22c55e', 613: E + ], 614: E 'kind': 'hist', 615: E 'legend': False, 616: E } 617: tests/plots/test_histogram.py:72: AssertionError 618: _____ test_plot_multiple_histograms_with_legend_when_group_col_is_not_nan ______ 619: self = , args = () 620: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': True} 621: expected = call(kind='hist', ax=, legend=True, color=) 622: actual = call(kind='hist', ax=, legend=True, alpha=0.5, color=['#22c55e', '#3b82f6']) 623: _error_message = ._error_message at 0x7feb0317cf40> 624: cause = None 625: def assert_called_with(self, /, *args, **kwargs): 626: """assert that the last call was made with the specified arguments. 627: Raises an AssertionError if the args and keyword args passed in are 628: different to the last call to the mock.""" 629: if self.call_args is None: 630: expected = self._format_mock_call_signature(args, kwargs) 631: actual = 'not called.' 632: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 633: % (expected, actual)) 634: raise AssertionError(error_message) 635: def _error_message(): 636: msg = self._format_mock_failure_message(args, kwargs) 637: return msg 638: expected = self._call_matcher(_Call((args, kwargs), two=True)) 639: actual = self._call_matcher(self.call_args) 640: if actual != expected: 641: cause = expected if isinstance(expected, Exception) else None 642: > raise AssertionError(_error_message()) from cause 643: E AssertionError: expected call not found. 644: E Expected: plot(kind='hist', ax=, legend=True, color=) 645: E Actual: plot(kind='hist', ax=, legend=True, alpha=0.5, color=['#22c55e', '#3b82f6']) 646: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 650: def assert_called_once_with(self, /, *args, **kwargs): 651: """assert that the mock was called exactly once and that that call was 652: with the specified arguments.""" 653: if not self.call_count == 1: 654: msg = ("Expected '%s' to be called once. Called %s times.%s" 655: % (self._mock_name or 'mock', 656: self.call_count, 657: self._calls_repr())) 658: raise AssertionError(msg) 659: > return self.assert_called_with(*args, **kwargs) 660: E AssertionError: expected call not found. ... 
679: E + 'color': [ 680: E ? ^ 681: E + '#22c55e', 682: E + '#3b82f6', 683: E + ], 684: E 'kind': 'hist', 685: E 'legend': True, 686: E } 687: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 697: def test_plot_multiple_histograms_with_legend_when_group_col_is_not_nan(sample_dataframe, mocker): 698: """Test the plot function with multiple histograms and a legend when group_col is not None.""" 699: # Create the plot axis using plt.subplots() 700: _, ax = plt.subplots() 701: # Call the plot function with grouping 702: resulted_ax = plot(df=sample_dataframe, value_col="quantity", group_col="category", ax=ax) 703: # Verify that df.plot was called for multiple histograms with correct parameters 704: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=True, color=mocker.ANY) 705: E AssertionError: expected call not found. ... 724: E + 'color': [ 725: E ? ^ 726: E + '#22c55e', 727: E + '#3b82f6', 728: E + ], 729: E 'kind': 'hist', 730: E 'legend': True, 731: E } 732: tests/plots/test_histogram.py:98: AssertionError 733: ______ test_plot_multiple_histograms_with_legend_when_value_col_is_a_list ______ 734: self = , args = () 735: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': True} 736: expected = call(kind='hist', ax=, legend=True, color=) 737: actual = call(kind='hist', ax=, legend=True, color=['#22c55e', '#3b82f6'], alpha=0.5) 738: _error_message = ._error_message at 0x7feb0315b420> 739: cause = None 740: def assert_called_with(self, /, *args, **kwargs): 741: """assert that the last call was made with the specified arguments. 742: Raises an AssertionError if the args and keyword args passed in are 743: different to the last call to the mock.""" 744: if self.call_args is None: 745: expected = self._format_mock_call_signature(args, kwargs) 746: actual = 'not called.' 747: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 748: % (expected, actual)) 749: raise AssertionError(error_message) 750: def _error_message(): 751: msg = self._format_mock_failure_message(args, kwargs) 752: return msg 753: expected = self._call_matcher(_Call((args, kwargs), two=True)) 754: actual = self._call_matcher(self.call_args) 755: if actual != expected: 756: cause = expected if isinstance(expected, Exception) else None 757: > raise AssertionError(_error_message()) from cause 758: E AssertionError: expected call not found. 759: E Expected: plot(kind='hist', ax=, legend=True, color=) 760: E Actual: plot(kind='hist', ax=, legend=True, color=['#22c55e', '#3b82f6'], alpha=0.5) 761: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 765: def assert_called_once_with(self, /, *args, **kwargs): 766: """assert that the mock was called exactly once and that that call was 767: with the specified arguments.""" 768: if not self.call_count == 1: 769: msg = ("Expected '%s' to be called once. Called %s times.%s" 770: % (self._mock_name or 'mock', 771: self.call_count, 772: self._calls_repr())) 773: raise AssertionError(msg) 774: > return self.assert_called_with(*args, **kwargs) 775: E AssertionError: expected call not found. ... 794: E + 'color': [ 795: E ? ^ 796: E + '#22c55e', 797: E + '#3b82f6', 798: E + ], 799: E 'kind': 'hist', 800: E 'legend': True, 801: E } 802: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 
812: def test_plot_multiple_histograms_with_legend_when_value_col_is_a_list(sample_dataframe, mocker): 813: """Test the plot function with multiple histograms and a legend when value_col is a list.""" 814: # Create the plot axis using plt.subplots() 815: _, ax = plt.subplots() 816: # Call the plot function with grouping 817: resulted_ax = plot(df=sample_dataframe, value_col=["quantity", "category"], ax=ax) 818: # Verify that df.plot was called for multiple histograms with correct parameters 819: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=True, color=mocker.ANY) 820: E AssertionError: expected call not found. ... 839: E + 'color': [ 840: E ? ^ 841: E + '#22c55e', 842: E + '#3b82f6', 843: E + ], 844: E 'kind': 'hist', 845: E 'legend': True, 846: E } 847: tests/plots/test_histogram.py:124: AssertionError 848: __________________________ test_plot_with_source_text __________________________ 849: self = , args = () 850: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': False} 851: expected = call(kind='hist', ax=, legend=False, color=) 852: actual = call(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 853: _error_message = ._error_message at 0x7feb01f0c7c0> 854: cause = None 855: def assert_called_with(self, /, *args, **kwargs): 856: """assert that the last call was made with the specified arguments. 857: Raises an AssertionError if the args and keyword args passed in are 858: different to the last call to the mock.""" 859: if self.call_args is None: 860: expected = self._format_mock_call_signature(args, kwargs) 861: actual = 'not called.' 862: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 863: % (expected, actual)) 864: raise AssertionError(error_message) 865: def _error_message(): 866: msg = self._format_mock_failure_message(args, kwargs) 867: return msg 868: expected = self._call_matcher(_Call((args, kwargs), two=True)) 869: actual = self._call_matcher(self.call_args) 870: if actual != expected: 871: cause = expected if isinstance(expected, Exception) else None 872: > raise AssertionError(_error_message()) from cause 873: E AssertionError: expected call not found. 874: E Expected: plot(kind='hist', ax=, legend=False, color=) 875: E Actual: plot(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 876: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 880: def assert_called_once_with(self, /, *args, **kwargs): 881: """assert that the mock was called exactly once and that that call was 882: with the specified arguments.""" 883: if not self.call_count == 1: 884: msg = ("Expected '%s' to be called once. Called %s times.%s" 885: % (self._mock_name or 'mock', 886: self.call_count, 887: self._calls_repr())) 888: raise AssertionError(msg) 889: > return self.assert_called_with(*args, **kwargs) 890: E AssertionError: expected call not found. ... 908: E ? ^^^^^^ 909: E + 'color': [ 910: E ? ^ 911: E + '#22c55e', 912: E + ], 913: E 'kind': 'hist', 914: E 'legend': False, 915: E } 916: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 
926: def test_plot_with_source_text(sample_dataframe, mocker): 927: """Test the plot function with source text.""" 928: # Create the plot axis using plt.subplots() 929: _, ax = plt.subplots() 930: # Call the plot function with source text 931: resulted_ax = plot(df=sample_dataframe, value_col="quantity", ax=ax, source_text="Source: Test Data") 932: # Verify that df.plot was called with correct parameters 933: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=False, color=mocker.ANY) 934: E AssertionError: expected call not found. ... 952: E ? ^^^^^^ 953: E + 'color': [ 954: E ? ^ 955: E + '#22c55e', 956: E + ], 957: E 'kind': 'hist', 958: E 'legend': False, 959: E } 960: tests/plots/test_histogram.py:150: AssertionError 961: ___________________________ test_plot_custom_labels ____________________________ 962: self = , args = () 963: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': False} 964: expected = call(kind='hist', ax=, legend=False, color=) 965: actual = call(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 966: _error_message = ._error_message at 0x7feb01ff0180> 967: cause = None 968: def assert_called_with(self, /, *args, **kwargs): 969: """assert that the last call was made with the specified arguments. 970: Raises an AssertionError if the args and keyword args passed in are 971: different to the last call to the mock.""" 972: if self.call_args is None: 973: expected = self._format_mock_call_signature(args, kwargs) 974: actual = 'not called.' 975: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 976: % (expected, actual)) 977: raise AssertionError(error_message) 978: def _error_message(): 979: msg = self._format_mock_failure_message(args, kwargs) 980: return msg 981: expected = self._call_matcher(_Call((args, kwargs), two=True)) 982: actual = self._call_matcher(self.call_args) 983: if actual != expected: 984: cause = expected if isinstance(expected, Exception) else None 985: > raise AssertionError(_error_message()) from cause 986: E AssertionError: expected call not found. 987: E Expected: plot(kind='hist', ax=, legend=False, color=) 988: E Actual: plot(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 989: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 993: def assert_called_once_with(self, /, *args, **kwargs): 994: """assert that the mock was called exactly once and that that call was 995: with the specified arguments.""" 996: if not self.call_count == 1: 997: msg = ("Expected '%s' to be called once. Called %s times.%s" 998: % (self._mock_name or 'mock', 999: self.call_count, 1000: self._calls_repr())) 1001: raise AssertionError(msg) 1002: > return self.assert_called_with(*args, **kwargs) 1003: E AssertionError: expected call not found. ... 1021: E ? ^^^^^^ 1022: E + 'color': [ 1023: E ? ^ 1024: E + '#22c55e', 1025: E + ], 1026: E 'kind': 'hist', 1027: E 'legend': False, 1028: E } 1029: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 
1039: def test_plot_custom_labels(sample_dataframe, mocker): 1040: """Test the plot function with custom x and y labels.""" 1041: # Create the plot axis using plt.subplots() 1042: _, ax = plt.subplots() 1043: # Call the plot function with custom labels for x and y axes 1044: resulted_ax = plot(df=sample_dataframe, value_col="quantity", ax=ax, x_label="Custom X", y_label="Custom Y") 1045: # Verify that df.plot was called with correct parameters 1046: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=False, color=mocker.ANY) 1047: E AssertionError: expected call not found. ... 1065: E ? ^^^^^^ 1066: E + 'color': [ 1067: E ? ^ 1068: E + '#22c55e', 1069: E + ], 1070: E 'kind': 'hist', 1071: E 'legend': False, 1072: E } 1073: tests/plots/test_histogram.py:176: AssertionError 1074: ____________________________ test_plot_with_series _____________________________ 1075: self = , args = () 1076: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': False} 1077: expected = call(kind='hist', ax=, legend=False, color=) 1078: actual = call(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 1079: _error_message = ._error_message at 0x7feb01f4d6c0> 1080: cause = None 1081: def assert_called_with(self, /, *args, **kwargs): 1082: """assert that the last call was made with the specified arguments. 1083: Raises an AssertionError if the args and keyword args passed in are 1084: different to the last call to the mock.""" 1085: if self.call_args is None: 1086: expected = self._format_mock_call_signature(args, kwargs) 1087: actual = 'not called.' 1088: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 1089: % (expected, actual)) 1090: raise AssertionError(error_message) 1091: def _error_message(): 1092: msg = self._format_mock_failure_message(args, kwargs) 1093: return msg 1094: expected = self._call_matcher(_Call((args, kwargs), two=True)) 1095: actual = self._call_matcher(self.call_args) 1096: if actual != expected: 1097: cause = expected if isinstance(expected, Exception) else None 1098: > raise AssertionError(_error_message()) from cause 1099: E AssertionError: expected call not found. 1100: E Expected: plot(kind='hist', ax=, legend=False, color=) 1101: E Actual: plot(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 1102: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 1106: def assert_called_once_with(self, /, *args, **kwargs): 1107: """assert that the mock was called exactly once and that that call was 1108: with the specified arguments.""" 1109: if not self.call_count == 1: 1110: msg = ("Expected '%s' to be called once. Called %s times.%s" 1111: % (self._mock_name or 'mock', 1112: self.call_count, 1113: self._calls_repr())) 1114: raise AssertionError(msg) 1115: > return self.assert_called_with(*args, **kwargs) 1116: E AssertionError: expected call not found. ... 1134: E ? ^^^^^^ 1135: E + 'color': [ 1136: E ? ^ 1137: E + '#22c55e', 1138: E + ], 1139: E 'kind': 'hist', 1140: E 'legend': False, 1141: E } 1142: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 
1152: def test_plot_with_series(sample_series, mocker): 1153: """Test the plot function with a pandas series.""" 1154: # Create the plot axis using plt.subplots() 1155: _, ax = plt.subplots() 1156: # Call the plot function with a series (instead of dataframe and value_col) 1157: resulted_ax = plot(df=sample_series, ax=ax) 1158: # Verify that pd.Series.plot was called with correct parameters 1159: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=False, color=mocker.ANY) 1160: E AssertionError: expected call not found. ... 1178: E ? ^^^^^^ 1179: E + 'color': [ 1180: E ? ^ 1181: E + '#22c55e', 1182: E + ], 1183: E 'kind': 'hist', 1184: E 'legend': False, 1185: E } 1186: tests/plots/test_histogram.py:202: AssertionError 1187: ___________________ test_get_base_cmap_three_or_fewer_colors ___________________ 1188: def test_get_base_cmap_three_or_fewer_colors(): 1189: """Test the get_base_cmap function with three or fewer colors.""" 1190: # Test with 3 colors (all green shades) 1191: > gen = get_base_cmap(num_colors=3) 1192: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1193: tests/test_standard_graphs.py:135: TypeError 1194: ________________________ test_get_base_cmap_two_colors _________________________ 1195: def test_get_base_cmap_two_colors(): 1196: """Test the get_base_cmap function with two colors.""" 1197: # Test with 2 colors (only green 500 and green 300) 1198: > gen = get_base_cmap(num_colors=2) 1199: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1200: tests/test_standard_graphs.py:148: TypeError 1201: _________________________ test_get_base_cmap_one_color _________________________ 1202: def test_get_base_cmap_one_color(): 1203: """Test the get_base_cmap function with one color.""" 1204: # Test with 1 color (only green 500) 1205: > gen = get_base_cmap(num_colors=1) 1206: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1207: tests/test_standard_graphs.py:161: TypeError 1208: __________________ test_get_base_cmap_more_than_three_colors ___________________ 1209: def test_get_base_cmap_more_than_three_colors(): 1210: """Test the get_base_cmap function with more than three colors.""" 1211: # Test with 4 colors (mix of all colors) 1212: > gen = get_base_cmap(num_colors=4) 1213: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1214: tests/test_standard_graphs.py:174: TypeError 1215: ________________ test_get_base_cmap_more_than_available_colors _________________ 1216: def test_get_base_cmap_more_than_available_colors(): 1217: """Test the get_base_cmap function with more colors than available.""" 1218: # Test with more colors than available, ensure cycling occurs 1219: > gen = get_base_cmap(num_colors=9) 1220: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1221: tests/test_standard_graphs.py:192: TypeError 1222: ______________________ test_get_base_cmap_cycle_behavior _______________________ 1223: def test_get_base_cmap_cycle_behavior(): 1224: """Test the cycling behavior of the get_base_cmap function.""" 1225: # Test with cycling colors 1226: > gen = get_base_cmap(num_colors=3) 1227: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1228: tests/test_standard_graphs.py:215: TypeError ... 
1270: pyretailscience/standard_graphs.py 130 75 70 3 38% 67-119, 245-320, 376->379, 418-426, 436 1271: pyretailscience/style/graph_utils.py 90 18 28 8 73% 16-17, 63-64, 80-86, 125, 132->140, 140->148, 170-178, 193, 208->214, 213 1272: pyretailscience/style/tailwind.py 68 2 10 1 96% 316-317 1273: ------------------------------------------------------------------------------------ 1274: TOTAL 1041 431 438 28 57% 1275: 3 files skipped due to complete coverage. 1276: Coverage XML written to file coverage.xml 1277: =========================== short test summary info ============================ 1278: FAILED tests/plots/test_histogram.py::test_plot_single_histogram_no_legend - AssertionError: expected call not found. ... 1292: ? ^^^^^^ 1293: + 'color': [ 1294: ? ^ 1295: + '#22c55e', 1296: + ], 1297: 'kind': 'hist', 1298: 'legend': False, 1299: } 1300: FAILED tests/plots/test_histogram.py::test_plot_multiple_histograms_with_legend_when_group_col_is_not_nan - AssertionError: expected call not found. ... 1315: + 'color': [ 1316: ? ^ 1317: + '#22c55e', 1318: + '#3b82f6', 1319: + ], 1320: 'kind': 'hist', 1321: 'legend': True, 1322: } 1323: FAILED tests/plots/test_histogram.py::test_plot_multiple_histograms_with_legend_when_value_col_is_a_list - AssertionError: expected call not found. ... 1338: + 'color': [ 1339: ? ^ 1340: + '#22c55e', 1341: + '#3b82f6', 1342: + ], 1343: 'kind': 'hist', 1344: 'legend': True, 1345: } 1346: FAILED tests/plots/test_histogram.py::test_plot_with_source_text - AssertionError: expected call not found. ... 1360: ? ^^^^^^ 1361: + 'color': [ 1362: ? ^ 1363: + '#22c55e', 1364: + ], 1365: 'kind': 'hist', 1366: 'legend': False, 1367: } 1368: FAILED tests/plots/test_histogram.py::test_plot_custom_labels - AssertionError: expected call not found. ... 1382: ? ^^^^^^ 1383: + 'color': [ 1384: ? ^ 1385: + '#22c55e', 1386: + ], 1387: 'kind': 'hist', 1388: 'legend': False, 1389: } 1390: FAILED tests/plots/test_histogram.py::test_plot_with_series - AssertionError: expected call not found. ... 1404: ? ^^^^^^ 1405: + 'color': [ 1406: ? ^ 1407: + '#22c55e', 1408: + ], 1409: 'kind': 'hist', 1410: 'legend': False, 1411: } 1412: FAILED tests/test_standard_graphs.py::test_get_base_cmap_three_or_fewer_colors - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1413: FAILED tests/test_standard_graphs.py::test_get_base_cmap_two_colors - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1414: FAILED tests/test_standard_graphs.py::test_get_base_cmap_one_color - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1415: FAILED tests/test_standard_graphs.py::test_get_base_cmap_more_than_three_colors - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1416: FAILED tests/test_standard_graphs.py::test_get_base_cmap_more_than_available_colors - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1417: FAILED tests/test_standard_graphs.py::test_get_base_cmap_cycle_behavior - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1418: ================= 12 failed, 125 passed, 11 warnings in 4.59s ================== 1419: nbstripout...............................................................Passed 1420: ##[error]Process completed with exit code 1. ```
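
Based on the mock-call mismatch in the logs above (the expected call omits `alpha`, while the actual call passes `color=['#22c55e']` and `alpha=0.5`), one way to realign the expectation is sketched below. This is a hedged sketch, not the fix that was actually committed; `sample_dataframe` and the patched `pd.DataFrame.plot` are assumed to come from the existing test fixtures, and the import mirrors the module path added in this PR.

```python
import matplotlib.pyplot as plt
import pandas as pd

from pyretailscience.plots.histogram import plot


def test_plot_single_histogram_no_legend(sample_dataframe, mocker):
    """Sketch: expectation updated to include the alpha/color kwargs the code now passes."""
    _, ax = plt.subplots()
    plot(df=sample_dataframe, value_col="quantity", ax=ax)
    pd.DataFrame.plot.assert_called_once_with(
        kind="hist",
        ax=ax,
        legend=False,
        color=mocker.ANY,  # the real call passes a concrete list such as ["#22c55e"]
        alpha=0.5,         # the plotting code now always forwards alpha explicitly
    )
```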

    codiumai-pr-agent-pro[bot] commented 2 months ago


    CI Failure Feedback 🧐

    **Action:** Pre-Commit
    **Failed stage:** [Run Pre-commit](https://github.com/Data-Simply/pyretailscience/actions/runs/11128214192/job/30922469714) [❌]
    Relevant error logs: ```yaml 1: ##[group]Operating System 2: Ubuntu ... 468: [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. 469: [INFO] Once installed this environment will be reused. 470: [INFO] This may take a few minutes... 471: [INFO] Installing environment for https://github.com/kynan/nbstripout. 472: [INFO] Once installed this environment will be reused. 473: [INFO] This may take a few minutes... 474: ruff.....................................................................Passed 475: ruff-format..............................................................Passed 476: trim trailing whitespace.................................................Failed ... 478: - exit code: 1 479: - files were modified by this hook 480: Fixing docs/assets/images/analysis_modules/plots/line_plot.svg 481: Fixing docs/assets/images/analysis_modules/plots/histogram_plot.svg 482: fix end of files.........................................................Passed 483: fix python encoding pragma...............................................Passed 484: check yaml...............................................................Passed 485: debug statements (python)................................................Passed 486: pytest...................................................................Failed ... 496: tests/plots/test_line.py ......... [ 16%] 497: tests/test_cross_shop.py ........ [ 21%] 498: tests/test_gain_loss.py ...................... [ 37%] 499: tests/test_options.py ...................... [ 54%] 500: tests/test_product_association.py ............... [ 64%] 501: tests/test_range_planning.py ........ [ 70%] 502: tests/test_segmentation.py ...................... [ 86%] 503: tests/test_standard_graphs.py .....FFFFFF....... [100%] 504: =================================== FAILURES =================================== 505: _____________________ test_plot_single_histogram_no_legend _____________________ 506: self = , args = () 507: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': False} 508: expected = call(kind='hist', ax=, legend=False, color=) 509: actual = call(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 510: _error_message = ._error_message at 0x7f367bdda7a0> 511: cause = None 512: def assert_called_with(self, /, *args, **kwargs): 513: """assert that the last call was made with the specified arguments. 514: Raises an AssertionError if the args and keyword args passed in are 515: different to the last call to the mock.""" 516: if self.call_args is None: 517: expected = self._format_mock_call_signature(args, kwargs) 518: actual = 'not called.' 519: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 520: % (expected, actual)) 521: raise AssertionError(error_message) 522: def _error_message(): 523: msg = self._format_mock_failure_message(args, kwargs) 524: return msg 525: expected = self._call_matcher(_Call((args, kwargs), two=True)) 526: actual = self._call_matcher(self.call_args) 527: if actual != expected: 528: cause = expected if isinstance(expected, Exception) else None 529: > raise AssertionError(_error_message()) from cause 530: E AssertionError: expected call not found. 531: E Expected: plot(kind='hist', ax=, legend=False, color=) 532: E Actual: plot(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 533: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 
537: def assert_called_once_with(self, /, *args, **kwargs): 538: """assert that the mock was called exactly once and that that call was 539: with the specified arguments.""" 540: if not self.call_count == 1: 541: msg = ("Expected '%s' to be called once. Called %s times.%s" 542: % (self._mock_name or 'mock', 543: self.call_count, 544: self._calls_repr())) 545: raise AssertionError(msg) 546: > return self.assert_called_with(*args, **kwargs) 547: E AssertionError: expected call not found. ... 565: E ? ^^^^^^ 566: E + 'color': [ 567: E ? ^ 568: E + '#22c55e', 569: E + ], 570: E 'kind': 'hist', 571: E 'legend': False, 572: E } 573: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 583: def test_plot_single_histogram_no_legend(sample_dataframe, mocker): 584: """Test the plot function with a single histogram and no legend.""" 585: # Create the plot axis using plt.subplots() 586: _, ax = plt.subplots() 587: # Call the plot function 588: resulted_ax = plot(df=sample_dataframe, value_col="quantity", ax=ax) 589: # Verify that df.plot was called with correct parameters 590: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=False, color=mocker.ANY) 591: E AssertionError: expected call not found. ... 609: E ? ^^^^^^ 610: E + 'color': [ 611: E ? ^ 612: E + '#22c55e', 613: E + ], 614: E 'kind': 'hist', 615: E 'legend': False, 616: E } 617: tests/plots/test_histogram.py:72: AssertionError 618: _____ test_plot_multiple_histograms_with_legend_when_group_col_is_not_nan ______ 619: self = , args = () 620: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': True} 621: expected = call(kind='hist', ax=, legend=True, color=) 622: actual = call(kind='hist', ax=, legend=True, alpha=0.5, color=['#22c55e', '#3b82f6']) 623: _error_message = ._error_message at 0x7f367bcb4f40> 624: cause = None 625: def assert_called_with(self, /, *args, **kwargs): 626: """assert that the last call was made with the specified arguments. 627: Raises an AssertionError if the args and keyword args passed in are 628: different to the last call to the mock.""" 629: if self.call_args is None: 630: expected = self._format_mock_call_signature(args, kwargs) 631: actual = 'not called.' 632: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 633: % (expected, actual)) 634: raise AssertionError(error_message) 635: def _error_message(): 636: msg = self._format_mock_failure_message(args, kwargs) 637: return msg 638: expected = self._call_matcher(_Call((args, kwargs), two=True)) 639: actual = self._call_matcher(self.call_args) 640: if actual != expected: 641: cause = expected if isinstance(expected, Exception) else None 642: > raise AssertionError(_error_message()) from cause 643: E AssertionError: expected call not found. 644: E Expected: plot(kind='hist', ax=, legend=True, color=) 645: E Actual: plot(kind='hist', ax=, legend=True, alpha=0.5, color=['#22c55e', '#3b82f6']) 646: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 650: def assert_called_once_with(self, /, *args, **kwargs): 651: """assert that the mock was called exactly once and that that call was 652: with the specified arguments.""" 653: if not self.call_count == 1: 654: msg = ("Expected '%s' to be called once. Called %s times.%s" 655: % (self._mock_name or 'mock', 656: self.call_count, 657: self._calls_repr())) 658: raise AssertionError(msg) 659: > return self.assert_called_with(*args, **kwargs) 660: E AssertionError: expected call not found. ... 
679: E + 'color': [ 680: E ? ^ 681: E + '#22c55e', 682: E + '#3b82f6', 683: E + ], 684: E 'kind': 'hist', 685: E 'legend': True, 686: E } 687: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 697: def test_plot_multiple_histograms_with_legend_when_group_col_is_not_nan(sample_dataframe, mocker): 698: """Test the plot function with multiple histograms and a legend when group_col is not None.""" 699: # Create the plot axis using plt.subplots() 700: _, ax = plt.subplots() 701: # Call the plot function with grouping 702: resulted_ax = plot(df=sample_dataframe, value_col="quantity", group_col="category", ax=ax) 703: # Verify that df.plot was called for multiple histograms with correct parameters 704: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=True, color=mocker.ANY) 705: E AssertionError: expected call not found. ... 724: E + 'color': [ 725: E ? ^ 726: E + '#22c55e', 727: E + '#3b82f6', 728: E + ], 729: E 'kind': 'hist', 730: E 'legend': True, 731: E } 732: tests/plots/test_histogram.py:98: AssertionError 733: ______ test_plot_multiple_histograms_with_legend_when_value_col_is_a_list ______ 734: self = , args = () 735: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': True} 736: expected = call(kind='hist', ax=, legend=True, color=) 737: actual = call(kind='hist', ax=, legend=True, color=['#22c55e', '#3b82f6'], alpha=0.5) 738: _error_message = ._error_message at 0x7f367bca3420> 739: cause = None 740: def assert_called_with(self, /, *args, **kwargs): 741: """assert that the last call was made with the specified arguments. 742: Raises an AssertionError if the args and keyword args passed in are 743: different to the last call to the mock.""" 744: if self.call_args is None: 745: expected = self._format_mock_call_signature(args, kwargs) 746: actual = 'not called.' 747: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 748: % (expected, actual)) 749: raise AssertionError(error_message) 750: def _error_message(): 751: msg = self._format_mock_failure_message(args, kwargs) 752: return msg 753: expected = self._call_matcher(_Call((args, kwargs), two=True)) 754: actual = self._call_matcher(self.call_args) 755: if actual != expected: 756: cause = expected if isinstance(expected, Exception) else None 757: > raise AssertionError(_error_message()) from cause 758: E AssertionError: expected call not found. 759: E Expected: plot(kind='hist', ax=, legend=True, color=) 760: E Actual: plot(kind='hist', ax=, legend=True, color=['#22c55e', '#3b82f6'], alpha=0.5) 761: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 765: def assert_called_once_with(self, /, *args, **kwargs): 766: """assert that the mock was called exactly once and that that call was 767: with the specified arguments.""" 768: if not self.call_count == 1: 769: msg = ("Expected '%s' to be called once. Called %s times.%s" 770: % (self._mock_name or 'mock', 771: self.call_count, 772: self._calls_repr())) 773: raise AssertionError(msg) 774: > return self.assert_called_with(*args, **kwargs) 775: E AssertionError: expected call not found. ... 794: E + 'color': [ 795: E ? ^ 796: E + '#22c55e', 797: E + '#3b82f6', 798: E + ], 799: E 'kind': 'hist', 800: E 'legend': True, 801: E } 802: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 
812: def test_plot_multiple_histograms_with_legend_when_value_col_is_a_list(sample_dataframe, mocker): 813: """Test the plot function with multiple histograms and a legend when value_col is a list.""" 814: # Create the plot axis using plt.subplots() 815: _, ax = plt.subplots() 816: # Call the plot function with grouping 817: resulted_ax = plot(df=sample_dataframe, value_col=["quantity", "category"], ax=ax) 818: # Verify that df.plot was called for multiple histograms with correct parameters 819: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=True, color=mocker.ANY) 820: E AssertionError: expected call not found. ... 839: E + 'color': [ 840: E ? ^ 841: E + '#22c55e', 842: E + '#3b82f6', 843: E + ], 844: E 'kind': 'hist', 845: E 'legend': True, 846: E } 847: tests/plots/test_histogram.py:124: AssertionError 848: __________________________ test_plot_with_source_text __________________________ 849: self = , args = () 850: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': False} 851: expected = call(kind='hist', ax=, legend=False, color=) 852: actual = call(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 853: _error_message = ._error_message at 0x7f367830c7c0> 854: cause = None 855: def assert_called_with(self, /, *args, **kwargs): 856: """assert that the last call was made with the specified arguments. 857: Raises an AssertionError if the args and keyword args passed in are 858: different to the last call to the mock.""" 859: if self.call_args is None: 860: expected = self._format_mock_call_signature(args, kwargs) 861: actual = 'not called.' 862: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 863: % (expected, actual)) 864: raise AssertionError(error_message) 865: def _error_message(): 866: msg = self._format_mock_failure_message(args, kwargs) 867: return msg 868: expected = self._call_matcher(_Call((args, kwargs), two=True)) 869: actual = self._call_matcher(self.call_args) 870: if actual != expected: 871: cause = expected if isinstance(expected, Exception) else None 872: > raise AssertionError(_error_message()) from cause 873: E AssertionError: expected call not found. 874: E Expected: plot(kind='hist', ax=, legend=False, color=) 875: E Actual: plot(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 876: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 880: def assert_called_once_with(self, /, *args, **kwargs): 881: """assert that the mock was called exactly once and that that call was 882: with the specified arguments.""" 883: if not self.call_count == 1: 884: msg = ("Expected '%s' to be called once. Called %s times.%s" 885: % (self._mock_name or 'mock', 886: self.call_count, 887: self._calls_repr())) 888: raise AssertionError(msg) 889: > return self.assert_called_with(*args, **kwargs) 890: E AssertionError: expected call not found. ... 908: E ? ^^^^^^ 909: E + 'color': [ 910: E ? ^ 911: E + '#22c55e', 912: E + ], 913: E 'kind': 'hist', 914: E 'legend': False, 915: E } 916: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 
926: def test_plot_with_source_text(sample_dataframe, mocker): 927: """Test the plot function with source text.""" 928: # Create the plot axis using plt.subplots() 929: _, ax = plt.subplots() 930: # Call the plot function with source text 931: resulted_ax = plot(df=sample_dataframe, value_col="quantity", ax=ax, source_text="Source: Test Data") 932: # Verify that df.plot was called with correct parameters 933: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=False, color=mocker.ANY) 934: E AssertionError: expected call not found. ... 952: E ? ^^^^^^ 953: E + 'color': [ 954: E ? ^ 955: E + '#22c55e', 956: E + ], 957: E 'kind': 'hist', 958: E 'legend': False, 959: E } 960: tests/plots/test_histogram.py:150: AssertionError 961: ___________________________ test_plot_custom_labels ____________________________ 962: self = , args = () 963: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': False} 964: expected = call(kind='hist', ax=, legend=False, color=) 965: actual = call(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 966: _error_message = ._error_message at 0x7f36783f0180> 967: cause = None 968: def assert_called_with(self, /, *args, **kwargs): 969: """assert that the last call was made with the specified arguments. 970: Raises an AssertionError if the args and keyword args passed in are 971: different to the last call to the mock.""" 972: if self.call_args is None: 973: expected = self._format_mock_call_signature(args, kwargs) 974: actual = 'not called.' 975: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 976: % (expected, actual)) 977: raise AssertionError(error_message) 978: def _error_message(): 979: msg = self._format_mock_failure_message(args, kwargs) 980: return msg 981: expected = self._call_matcher(_Call((args, kwargs), two=True)) 982: actual = self._call_matcher(self.call_args) 983: if actual != expected: 984: cause = expected if isinstance(expected, Exception) else None 985: > raise AssertionError(_error_message()) from cause 986: E AssertionError: expected call not found. 987: E Expected: plot(kind='hist', ax=, legend=False, color=) 988: E Actual: plot(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 989: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 993: def assert_called_once_with(self, /, *args, **kwargs): 994: """assert that the mock was called exactly once and that that call was 995: with the specified arguments.""" 996: if not self.call_count == 1: 997: msg = ("Expected '%s' to be called once. Called %s times.%s" 998: % (self._mock_name or 'mock', 999: self.call_count, 1000: self._calls_repr())) 1001: raise AssertionError(msg) 1002: > return self.assert_called_with(*args, **kwargs) 1003: E AssertionError: expected call not found. ... 1021: E ? ^^^^^^ 1022: E + 'color': [ 1023: E ? ^ 1024: E + '#22c55e', 1025: E + ], 1026: E 'kind': 'hist', 1027: E 'legend': False, 1028: E } 1029: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 
1039: def test_plot_custom_labels(sample_dataframe, mocker): 1040: """Test the plot function with custom x and y labels.""" 1041: # Create the plot axis using plt.subplots() 1042: _, ax = plt.subplots() 1043: # Call the plot function with custom labels for x and y axes 1044: resulted_ax = plot(df=sample_dataframe, value_col="quantity", ax=ax, x_label="Custom X", y_label="Custom Y") 1045: # Verify that df.plot was called with correct parameters 1046: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=False, color=mocker.ANY) 1047: E AssertionError: expected call not found. ... 1065: E ? ^^^^^^ 1066: E + 'color': [ 1067: E ? ^ 1068: E + '#22c55e', 1069: E + ], 1070: E 'kind': 'hist', 1071: E 'legend': False, 1072: E } 1073: tests/plots/test_histogram.py:176: AssertionError 1074: ____________________________ test_plot_with_series _____________________________ 1075: self = , args = () 1076: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': False} 1077: expected = call(kind='hist', ax=, legend=False, color=) 1078: actual = call(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 1079: _error_message = ._error_message at 0x7f367834d6c0> 1080: cause = None 1081: def assert_called_with(self, /, *args, **kwargs): 1082: """assert that the last call was made with the specified arguments. 1083: Raises an AssertionError if the args and keyword args passed in are 1084: different to the last call to the mock.""" 1085: if self.call_args is None: 1086: expected = self._format_mock_call_signature(args, kwargs) 1087: actual = 'not called.' 1088: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 1089: % (expected, actual)) 1090: raise AssertionError(error_message) 1091: def _error_message(): 1092: msg = self._format_mock_failure_message(args, kwargs) 1093: return msg 1094: expected = self._call_matcher(_Call((args, kwargs), two=True)) 1095: actual = self._call_matcher(self.call_args) 1096: if actual != expected: 1097: cause = expected if isinstance(expected, Exception) else None 1098: > raise AssertionError(_error_message()) from cause 1099: E AssertionError: expected call not found. 1100: E Expected: plot(kind='hist', ax=, legend=False, color=) 1101: E Actual: plot(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 1102: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 1106: def assert_called_once_with(self, /, *args, **kwargs): 1107: """assert that the mock was called exactly once and that that call was 1108: with the specified arguments.""" 1109: if not self.call_count == 1: 1110: msg = ("Expected '%s' to be called once. Called %s times.%s" 1111: % (self._mock_name or 'mock', 1112: self.call_count, 1113: self._calls_repr())) 1114: raise AssertionError(msg) 1115: > return self.assert_called_with(*args, **kwargs) 1116: E AssertionError: expected call not found. ... 1134: E ? ^^^^^^ 1135: E + 'color': [ 1136: E ? ^ 1137: E + '#22c55e', 1138: E + ], 1139: E 'kind': 'hist', 1140: E 'legend': False, 1141: E } 1142: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 
1152: def test_plot_with_series(sample_series, mocker): 1153: """Test the plot function with a pandas series.""" 1154: # Create the plot axis using plt.subplots() 1155: _, ax = plt.subplots() 1156: # Call the plot function with a series (instead of dataframe and value_col) 1157: resulted_ax = plot(df=sample_series, ax=ax) 1158: # Verify that pd.Series.plot was called with correct parameters 1159: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=False, color=mocker.ANY) 1160: E AssertionError: expected call not found. ... 1178: E ? ^^^^^^ 1179: E + 'color': [ 1180: E ? ^ 1181: E + '#22c55e', 1182: E + ], 1183: E 'kind': 'hist', 1184: E 'legend': False, 1185: E } 1186: tests/plots/test_histogram.py:202: AssertionError 1187: ___________________ test_get_base_cmap_three_or_fewer_colors ___________________ 1188: def test_get_base_cmap_three_or_fewer_colors(): 1189: """Test the get_base_cmap function with three or fewer colors.""" 1190: # Test with 3 colors (all green shades) 1191: > gen = get_base_cmap(num_colors=3) 1192: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1193: tests/test_standard_graphs.py:135: TypeError 1194: ________________________ test_get_base_cmap_two_colors _________________________ 1195: def test_get_base_cmap_two_colors(): 1196: """Test the get_base_cmap function with two colors.""" 1197: # Test with 2 colors (only green 500 and green 300) 1198: > gen = get_base_cmap(num_colors=2) 1199: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1200: tests/test_standard_graphs.py:148: TypeError 1201: _________________________ test_get_base_cmap_one_color _________________________ 1202: def test_get_base_cmap_one_color(): 1203: """Test the get_base_cmap function with one color.""" 1204: # Test with 1 color (only green 500) 1205: > gen = get_base_cmap(num_colors=1) 1206: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1207: tests/test_standard_graphs.py:161: TypeError 1208: __________________ test_get_base_cmap_more_than_three_colors ___________________ 1209: def test_get_base_cmap_more_than_three_colors(): 1210: """Test the get_base_cmap function with more than three colors.""" 1211: # Test with 4 colors (mix of all colors) 1212: > gen = get_base_cmap(num_colors=4) 1213: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1214: tests/test_standard_graphs.py:174: TypeError 1215: ________________ test_get_base_cmap_more_than_available_colors _________________ 1216: def test_get_base_cmap_more_than_available_colors(): 1217: """Test the get_base_cmap function with more colors than available.""" 1218: # Test with more colors than available, ensure cycling occurs 1219: > gen = get_base_cmap(num_colors=9) 1220: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1221: tests/test_standard_graphs.py:192: TypeError 1222: ______________________ test_get_base_cmap_cycle_behavior _______________________ 1223: def test_get_base_cmap_cycle_behavior(): 1224: """Test the cycling behavior of the get_base_cmap function.""" 1225: # Test with cycling colors 1226: > gen = get_base_cmap(num_colors=3) 1227: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1228: tests/test_standard_graphs.py:215: TypeError ... 
1270: pyretailscience/standard_graphs.py 130 75 70 3 38% 67-119, 245-320, 376->379, 418-426, 436 1271: pyretailscience/style/graph_utils.py 90 18 28 8 73% 16-17, 63-64, 80-86, 125, 132->140, 140->148, 170-178, 193, 208->214, 213 1272: pyretailscience/style/tailwind.py 68 2 10 1 96% 316-317 1273: ------------------------------------------------------------------------------------ 1274: TOTAL 1041 431 438 28 57% 1275: 3 files skipped due to complete coverage. 1276: Coverage XML written to file coverage.xml 1277: =========================== short test summary info ============================ 1278: FAILED tests/plots/test_histogram.py::test_plot_single_histogram_no_legend - AssertionError: expected call not found. ... 1292: ? ^^^^^^ 1293: + 'color': [ 1294: ? ^ 1295: + '#22c55e', 1296: + ], 1297: 'kind': 'hist', 1298: 'legend': False, 1299: } 1300: FAILED tests/plots/test_histogram.py::test_plot_multiple_histograms_with_legend_when_group_col_is_not_nan - AssertionError: expected call not found. ... 1315: + 'color': [ 1316: ? ^ 1317: + '#22c55e', 1318: + '#3b82f6', 1319: + ], 1320: 'kind': 'hist', 1321: 'legend': True, 1322: } 1323: FAILED tests/plots/test_histogram.py::test_plot_multiple_histograms_with_legend_when_value_col_is_a_list - AssertionError: expected call not found. ... 1338: + 'color': [ 1339: ? ^ 1340: + '#22c55e', 1341: + '#3b82f6', 1342: + ], 1343: 'kind': 'hist', 1344: 'legend': True, 1345: } 1346: FAILED tests/plots/test_histogram.py::test_plot_with_source_text - AssertionError: expected call not found. ... 1360: ? ^^^^^^ 1361: + 'color': [ 1362: ? ^ 1363: + '#22c55e', 1364: + ], 1365: 'kind': 'hist', 1366: 'legend': False, 1367: } 1368: FAILED tests/plots/test_histogram.py::test_plot_custom_labels - AssertionError: expected call not found. ... 1382: ? ^^^^^^ 1383: + 'color': [ 1384: ? ^ 1385: + '#22c55e', 1386: + ], 1387: 'kind': 'hist', 1388: 'legend': False, 1389: } 1390: FAILED tests/plots/test_histogram.py::test_plot_with_series - AssertionError: expected call not found. ... 1404: ? ^^^^^^ 1405: + 'color': [ 1406: ? ^ 1407: + '#22c55e', 1408: + ], 1409: 'kind': 'hist', 1410: 'legend': False, 1411: } 1412: FAILED tests/test_standard_graphs.py::test_get_base_cmap_three_or_fewer_colors - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1413: FAILED tests/test_standard_graphs.py::test_get_base_cmap_two_colors - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1414: FAILED tests/test_standard_graphs.py::test_get_base_cmap_one_color - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1415: FAILED tests/test_standard_graphs.py::test_get_base_cmap_more_than_three_colors - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1416: FAILED tests/test_standard_graphs.py::test_get_base_cmap_more_than_available_colors - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1417: FAILED tests/test_standard_graphs.py::test_get_base_cmap_cycle_behavior - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1418: ================= 12 failed, 125 passed, 11 warnings in 4.74s ================== 1419: nbstripout...............................................................Passed 1420: ##[error]Process completed with exit code 1. ```

    ✨ CI feedback usage guide:
    The CI feedback tool (`/checks`) automatically triggers when a PR has a failed check. The tool analyzes the failed checks and provides several feedbacks:
    - Failed stage
    - Failed test name
    - Failure summary
    - Relevant error logs

    In addition to being automatically triggered, the tool can also be invoked manually by commenting on a PR:
    ```
    /checks "https://github.com/{repo_name}/actions/runs/{run_number}/job/{job_number}"
    ```
    where `{repo_name}` is the name of the repository, `{run_number}` is the run number of the failed check, and `{job_number}` is the job number of the failed check.

    #### Configuration options
    - `enable_auto_checks_feedback` - if set to true, the tool will automatically provide feedback when a check fails. Default is true.
    - `excluded_checks_list` - a list of checks to exclude from the feedback, for example: ["check1", "check2"]. Default is an empty list.
    - `enable_help_text` - if set to true, the tool will provide a help message with the feedback. Default is true.
    - `persistent_comment` - if set to true, the tool will overwrite a previous checks comment with the new feedback. Default is true.
    - `final_update_message` - if `persistent_comment` is true and updating a previous checks message, the tool will also create a new message: "Persistent checks updated to latest commit". Default is true.

    See more information about the `checks` tool in the [docs](https://pr-agent-docs.codium.ai/tools/ci_feedback/).
    codiumai-pr-agent-pro[bot] commented 2 months ago

    PR-Agent was enabled for this repository. To continue using it, please link your git user with your CodiumAI identity here.

    CI Failure Feedback 🧐

    **Action:** Pre-Commit
    **Failed stage:** [Run Pre-commit](https://github.com/Data-Simply/pyretailscience/actions/runs/11140476098/job/30959319761) [❌]
    **Failed test name:** test_plot_single_histogram_no_legend
    **Failure summary:** The action failed due to pre-commit hook and test failures:
  • The trim trailing whitespace pre-commit hook failed because it had to modify files to remove trailing whitespace.
  • Several tests in tests/plots/test_histogram.py failed with an AssertionError: the assertions expect `plot` to be called with only `kind`, `ax`, `legend`, and `color`, but the actual call also passes `alpha=0.5` and a concrete list of hex colors such as `['#22c55e']` (a possible assertion update is sketched after the error logs below).
  • Tests in tests/test_standard_graphs.py failed with a TypeError because `get_base_cmap` was called with the keyword argument `num_colors`, which the function no longer accepts (a hedged test rewrite is also sketched below).
  • Relevant error logs: ```yaml 1: ##[group]Operating System 2: Ubuntu ... 468: [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. 469: [INFO] Once installed this environment will be reused. 470: [INFO] This may take a few minutes... 471: [INFO] Installing environment for https://github.com/kynan/nbstripout. 472: [INFO] Once installed this environment will be reused. 473: [INFO] This may take a few minutes... 474: ruff.....................................................................Passed 475: ruff-format..............................................................Passed 476: trim trailing whitespace.................................................Failed ... 478: - exit code: 1 479: - files were modified by this hook 480: Fixing docs/assets/images/analysis_modules/plots/line_plot.svg 481: Fixing docs/assets/images/analysis_modules/plots/histogram_plot.svg 482: fix end of files.........................................................Passed 483: fix python encoding pragma...............................................Passed 484: check yaml...............................................................Passed 485: debug statements (python)................................................Passed 486: pytest...................................................................Failed ... 496: tests/plots/test_line.py ......... [ 16%] 497: tests/test_cross_shop.py ........ [ 21%] 498: tests/test_gain_loss.py ...................... [ 37%] 499: tests/test_options.py ...................... [ 54%] 500: tests/test_product_association.py ............... [ 64%] 501: tests/test_range_planning.py ........ [ 70%] 502: tests/test_segmentation.py ...................... [ 86%] 503: tests/test_standard_graphs.py .....FFFFFF....... [100%] 504: =================================== FAILURES =================================== 505: _____________________ test_plot_single_histogram_no_legend _____________________ 506: self = , args = () 507: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': False} 508: expected = call(kind='hist', ax=, legend=False, color=) 509: actual = call(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 510: _error_message = ._error_message at 0x7f3e308de700> 511: cause = None 512: def assert_called_with(self, /, *args, **kwargs): 513: """assert that the last call was made with the specified arguments. 514: Raises an AssertionError if the args and keyword args passed in are 515: different to the last call to the mock.""" 516: if self.call_args is None: 517: expected = self._format_mock_call_signature(args, kwargs) 518: actual = 'not called.' 519: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 520: % (expected, actual)) 521: raise AssertionError(error_message) 522: def _error_message(): 523: msg = self._format_mock_failure_message(args, kwargs) 524: return msg 525: expected = self._call_matcher(_Call((args, kwargs), two=True)) 526: actual = self._call_matcher(self.call_args) 527: if actual != expected: 528: cause = expected if isinstance(expected, Exception) else None 529: > raise AssertionError(_error_message()) from cause 530: E AssertionError: expected call not found. 531: E Expected: plot(kind='hist', ax=, legend=False, color=) 532: E Actual: plot(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 533: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 
537: def assert_called_once_with(self, /, *args, **kwargs): 538: """assert that the mock was called exactly once and that that call was 539: with the specified arguments.""" 540: if not self.call_count == 1: 541: msg = ("Expected '%s' to be called once. Called %s times.%s" 542: % (self._mock_name or 'mock', 543: self.call_count, 544: self._calls_repr())) 545: raise AssertionError(msg) 546: > return self.assert_called_with(*args, **kwargs) 547: E AssertionError: expected call not found. ... 565: E ? ^^^^^^ 566: E + 'color': [ 567: E ? ^ 568: E + '#22c55e', 569: E + ], 570: E 'kind': 'hist', 571: E 'legend': False, 572: E } 573: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 583: def test_plot_single_histogram_no_legend(sample_dataframe, mocker): 584: """Test the plot function with a single histogram and no legend.""" 585: # Create the plot axis using plt.subplots() 586: _, ax = plt.subplots() 587: # Call the plot function 588: resulted_ax = plot(df=sample_dataframe, value_col="quantity", ax=ax) 589: # Verify that df.plot was called with correct parameters 590: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=False, color=mocker.ANY) 591: E AssertionError: expected call not found. ... 609: E ? ^^^^^^ 610: E + 'color': [ 611: E ? ^ 612: E + '#22c55e', 613: E + ], 614: E 'kind': 'hist', 615: E 'legend': False, 616: E } 617: tests/plots/test_histogram.py:72: AssertionError 618: _____ test_plot_multiple_histograms_with_legend_when_group_col_is_not_nan ______ 619: self = , args = () 620: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': True} 621: expected = call(kind='hist', ax=, legend=True, color=) 622: actual = call(kind='hist', ax=, legend=True, alpha=0.5, color=['#22c55e', '#3b82f6']) 623: _error_message = ._error_message at 0x7f3e2cfe8ea0> 624: cause = None 625: def assert_called_with(self, /, *args, **kwargs): 626: """assert that the last call was made with the specified arguments. 627: Raises an AssertionError if the args and keyword args passed in are 628: different to the last call to the mock.""" 629: if self.call_args is None: 630: expected = self._format_mock_call_signature(args, kwargs) 631: actual = 'not called.' 632: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 633: % (expected, actual)) 634: raise AssertionError(error_message) 635: def _error_message(): 636: msg = self._format_mock_failure_message(args, kwargs) 637: return msg 638: expected = self._call_matcher(_Call((args, kwargs), two=True)) 639: actual = self._call_matcher(self.call_args) 640: if actual != expected: 641: cause = expected if isinstance(expected, Exception) else None 642: > raise AssertionError(_error_message()) from cause 643: E AssertionError: expected call not found. 644: E Expected: plot(kind='hist', ax=, legend=True, color=) 645: E Actual: plot(kind='hist', ax=, legend=True, alpha=0.5, color=['#22c55e', '#3b82f6']) 646: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 650: def assert_called_once_with(self, /, *args, **kwargs): 651: """assert that the mock was called exactly once and that that call was 652: with the specified arguments.""" 653: if not self.call_count == 1: 654: msg = ("Expected '%s' to be called once. Called %s times.%s" 655: % (self._mock_name or 'mock', 656: self.call_count, 657: self._calls_repr())) 658: raise AssertionError(msg) 659: > return self.assert_called_with(*args, **kwargs) 660: E AssertionError: expected call not found. ... 
679: E + 'color': [ 680: E ? ^ 681: E + '#22c55e', 682: E + '#3b82f6', 683: E + ], 684: E 'kind': 'hist', 685: E 'legend': True, 686: E } 687: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 697: def test_plot_multiple_histograms_with_legend_when_group_col_is_not_nan(sample_dataframe, mocker): 698: """Test the plot function with multiple histograms and a legend when group_col is not None.""" 699: # Create the plot axis using plt.subplots() 700: _, ax = plt.subplots() 701: # Call the plot function with grouping 702: resulted_ax = plot(df=sample_dataframe, value_col="quantity", group_col="category", ax=ax) 703: # Verify that df.plot was called for multiple histograms with correct parameters 704: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=True, color=mocker.ANY) 705: E AssertionError: expected call not found. ... 724: E + 'color': [ 725: E ? ^ 726: E + '#22c55e', 727: E + '#3b82f6', 728: E + ], 729: E 'kind': 'hist', 730: E 'legend': True, 731: E } 732: tests/plots/test_histogram.py:98: AssertionError 733: ______ test_plot_multiple_histograms_with_legend_when_value_col_is_a_list ______ 734: self = , args = () 735: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': True} 736: expected = call(kind='hist', ax=, legend=True, color=) 737: actual = call(kind='hist', ax=, legend=True, color=['#22c55e', '#3b82f6'], alpha=0.5) 738: _error_message = ._error_message at 0x7f3e2cf87380> 739: cause = None 740: def assert_called_with(self, /, *args, **kwargs): 741: """assert that the last call was made with the specified arguments. 742: Raises an AssertionError if the args and keyword args passed in are 743: different to the last call to the mock.""" 744: if self.call_args is None: 745: expected = self._format_mock_call_signature(args, kwargs) 746: actual = 'not called.' 747: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 748: % (expected, actual)) 749: raise AssertionError(error_message) 750: def _error_message(): 751: msg = self._format_mock_failure_message(args, kwargs) 752: return msg 753: expected = self._call_matcher(_Call((args, kwargs), two=True)) 754: actual = self._call_matcher(self.call_args) 755: if actual != expected: 756: cause = expected if isinstance(expected, Exception) else None 757: > raise AssertionError(_error_message()) from cause 758: E AssertionError: expected call not found. 759: E Expected: plot(kind='hist', ax=, legend=True, color=) 760: E Actual: plot(kind='hist', ax=, legend=True, color=['#22c55e', '#3b82f6'], alpha=0.5) 761: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 765: def assert_called_once_with(self, /, *args, **kwargs): 766: """assert that the mock was called exactly once and that that call was 767: with the specified arguments.""" 768: if not self.call_count == 1: 769: msg = ("Expected '%s' to be called once. Called %s times.%s" 770: % (self._mock_name or 'mock', 771: self.call_count, 772: self._calls_repr())) 773: raise AssertionError(msg) 774: > return self.assert_called_with(*args, **kwargs) 775: E AssertionError: expected call not found. ... 794: E + 'color': [ 795: E ? ^ 796: E + '#22c55e', 797: E + '#3b82f6', 798: E + ], 799: E 'kind': 'hist', 800: E 'legend': True, 801: E } 802: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 
812: def test_plot_multiple_histograms_with_legend_when_value_col_is_a_list(sample_dataframe, mocker): 813: """Test the plot function with multiple histograms and a legend when value_col is a list.""" 814: # Create the plot axis using plt.subplots() 815: _, ax = plt.subplots() 816: # Call the plot function with grouping 817: resulted_ax = plot(df=sample_dataframe, value_col=["quantity", "category"], ax=ax) 818: # Verify that df.plot was called for multiple histograms with correct parameters 819: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=True, color=mocker.ANY) 820: E AssertionError: expected call not found. ... 839: E + 'color': [ 840: E ? ^ 841: E + '#22c55e', 842: E + '#3b82f6', 843: E + ], 844: E 'kind': 'hist', 845: E 'legend': True, 846: E } 847: tests/plots/test_histogram.py:124: AssertionError 848: __________________________ test_plot_with_source_text __________________________ 849: self = , args = () 850: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': False} 851: expected = call(kind='hist', ax=, legend=False, color=) 852: actual = call(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 853: _error_message = ._error_message at 0x7f3e2987c720> 854: cause = None 855: def assert_called_with(self, /, *args, **kwargs): 856: """assert that the last call was made with the specified arguments. 857: Raises an AssertionError if the args and keyword args passed in are 858: different to the last call to the mock.""" 859: if self.call_args is None: 860: expected = self._format_mock_call_signature(args, kwargs) 861: actual = 'not called.' 862: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 863: % (expected, actual)) 864: raise AssertionError(error_message) 865: def _error_message(): 866: msg = self._format_mock_failure_message(args, kwargs) 867: return msg 868: expected = self._call_matcher(_Call((args, kwargs), two=True)) 869: actual = self._call_matcher(self.call_args) 870: if actual != expected: 871: cause = expected if isinstance(expected, Exception) else None 872: > raise AssertionError(_error_message()) from cause 873: E AssertionError: expected call not found. 874: E Expected: plot(kind='hist', ax=, legend=False, color=) 875: E Actual: plot(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 876: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 880: def assert_called_once_with(self, /, *args, **kwargs): 881: """assert that the mock was called exactly once and that that call was 882: with the specified arguments.""" 883: if not self.call_count == 1: 884: msg = ("Expected '%s' to be called once. Called %s times.%s" 885: % (self._mock_name or 'mock', 886: self.call_count, 887: self._calls_repr())) 888: raise AssertionError(msg) 889: > return self.assert_called_with(*args, **kwargs) 890: E AssertionError: expected call not found. ... 908: E ? ^^^^^^ 909: E + 'color': [ 910: E ? ^ 911: E + '#22c55e', 912: E + ], 913: E 'kind': 'hist', 914: E 'legend': False, 915: E } 916: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 
926: def test_plot_with_source_text(sample_dataframe, mocker): 927: """Test the plot function with source text.""" 928: # Create the plot axis using plt.subplots() 929: _, ax = plt.subplots() 930: # Call the plot function with source text 931: resulted_ax = plot(df=sample_dataframe, value_col="quantity", ax=ax, source_text="Source: Test Data") 932: # Verify that df.plot was called with correct parameters 933: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=False, color=mocker.ANY) 934: E AssertionError: expected call not found. ... 952: E ? ^^^^^^ 953: E + 'color': [ 954: E ? ^ 955: E + '#22c55e', 956: E + ], 957: E 'kind': 'hist', 958: E 'legend': False, 959: E } 960: tests/plots/test_histogram.py:150: AssertionError 961: ___________________________ test_plot_custom_labels ____________________________ 962: self = , args = () 963: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': False} 964: expected = call(kind='hist', ax=, legend=False, color=) 965: actual = call(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 966: _error_message = ._error_message at 0x7f3e298600e0> 967: cause = None 968: def assert_called_with(self, /, *args, **kwargs): 969: """assert that the last call was made with the specified arguments. 970: Raises an AssertionError if the args and keyword args passed in are 971: different to the last call to the mock.""" 972: if self.call_args is None: 973: expected = self._format_mock_call_signature(args, kwargs) 974: actual = 'not called.' 975: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 976: % (expected, actual)) 977: raise AssertionError(error_message) 978: def _error_message(): 979: msg = self._format_mock_failure_message(args, kwargs) 980: return msg 981: expected = self._call_matcher(_Call((args, kwargs), two=True)) 982: actual = self._call_matcher(self.call_args) 983: if actual != expected: 984: cause = expected if isinstance(expected, Exception) else None 985: > raise AssertionError(_error_message()) from cause 986: E AssertionError: expected call not found. 987: E Expected: plot(kind='hist', ax=, legend=False, color=) 988: E Actual: plot(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 989: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 993: def assert_called_once_with(self, /, *args, **kwargs): 994: """assert that the mock was called exactly once and that that call was 995: with the specified arguments.""" 996: if not self.call_count == 1: 997: msg = ("Expected '%s' to be called once. Called %s times.%s" 998: % (self._mock_name or 'mock', 999: self.call_count, 1000: self._calls_repr())) 1001: raise AssertionError(msg) 1002: > return self.assert_called_with(*args, **kwargs) 1003: E AssertionError: expected call not found. ... 1021: E ? ^^^^^^ 1022: E + 'color': [ 1023: E ? ^ 1024: E + '#22c55e', 1025: E + ], 1026: E 'kind': 'hist', 1027: E 'legend': False, 1028: E } 1029: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 
1039: def test_plot_custom_labels(sample_dataframe, mocker): 1040: """Test the plot function with custom x and y labels.""" 1041: # Create the plot axis using plt.subplots() 1042: _, ax = plt.subplots() 1043: # Call the plot function with custom labels for x and y axes 1044: resulted_ax = plot(df=sample_dataframe, value_col="quantity", ax=ax, x_label="Custom X", y_label="Custom Y") 1045: # Verify that df.plot was called with correct parameters 1046: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=False, color=mocker.ANY) 1047: E AssertionError: expected call not found. ... 1065: E ? ^^^^^^ 1066: E + 'color': [ 1067: E ? ^ 1068: E + '#22c55e', 1069: E + ], 1070: E 'kind': 'hist', 1071: E 'legend': False, 1072: E } 1073: tests/plots/test_histogram.py:176: AssertionError 1074: ____________________________ test_plot_with_series _____________________________ 1075: self = , args = () 1076: kwargs = {'ax': , 'color': , 'kind': 'hist', 'legend': False} 1077: expected = call(kind='hist', ax=, legend=False, color=) 1078: actual = call(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 1079: _error_message = ._error_message at 0x7f3e2983d620> 1080: cause = None 1081: def assert_called_with(self, /, *args, **kwargs): 1082: """assert that the last call was made with the specified arguments. 1083: Raises an AssertionError if the args and keyword args passed in are 1084: different to the last call to the mock.""" 1085: if self.call_args is None: 1086: expected = self._format_mock_call_signature(args, kwargs) 1087: actual = 'not called.' 1088: error_message = ('expected call not found.\nExpected: %s\n Actual: %s' 1089: % (expected, actual)) 1090: raise AssertionError(error_message) 1091: def _error_message(): 1092: msg = self._format_mock_failure_message(args, kwargs) 1093: return msg 1094: expected = self._call_matcher(_Call((args, kwargs), two=True)) 1095: actual = self._call_matcher(self.call_args) 1096: if actual != expected: 1097: cause = expected if isinstance(expected, Exception) else None 1098: > raise AssertionError(_error_message()) from cause 1099: E AssertionError: expected call not found. 1100: E Expected: plot(kind='hist', ax=, legend=False, color=) 1101: E Actual: plot(kind='hist', ax=, legend=False, color=['#22c55e'], alpha=0.5) 1102: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:939: AssertionError ... 1106: def assert_called_once_with(self, /, *args, **kwargs): 1107: """assert that the mock was called exactly once and that that call was 1108: with the specified arguments.""" 1109: if not self.call_count == 1: 1110: msg = ("Expected '%s' to be called once. Called %s times.%s" 1111: % (self._mock_name or 'mock', 1112: self.call_count, 1113: self._calls_repr())) 1114: raise AssertionError(msg) 1115: > return self.assert_called_with(*args, **kwargs) 1116: E AssertionError: expected call not found. ... 1134: E ? ^^^^^^ 1135: E + 'color': [ 1136: E ? ^ 1137: E + '#22c55e', 1138: E + ], 1139: E 'kind': 'hist', 1140: E 'legend': False, 1141: E } 1142: /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/unittest/mock.py:951: AssertionError ... 
1152: def test_plot_with_series(sample_series, mocker): 1153: """Test the plot function with a pandas series.""" 1154: # Create the plot axis using plt.subplots() 1155: _, ax = plt.subplots() 1156: # Call the plot function with a series (instead of dataframe and value_col) 1157: resulted_ax = plot(df=sample_series, ax=ax) 1158: # Verify that pd.Series.plot was called with correct parameters 1159: > pd.DataFrame.plot.assert_called_once_with(kind="hist", ax=ax, legend=False, color=mocker.ANY) 1160: E AssertionError: expected call not found. ... 1178: E ? ^^^^^^ 1179: E + 'color': [ 1180: E ? ^ 1181: E + '#22c55e', 1182: E + ], 1183: E 'kind': 'hist', 1184: E 'legend': False, 1185: E } 1186: tests/plots/test_histogram.py:202: AssertionError 1187: ___________________ test_get_base_cmap_three_or_fewer_colors ___________________ 1188: def test_get_base_cmap_three_or_fewer_colors(): 1189: """Test the get_base_cmap function with three or fewer colors.""" 1190: # Test with 3 colors (all green shades) 1191: > gen = get_base_cmap(num_colors=3) 1192: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1193: tests/test_standard_graphs.py:135: TypeError 1194: ________________________ test_get_base_cmap_two_colors _________________________ 1195: def test_get_base_cmap_two_colors(): 1196: """Test the get_base_cmap function with two colors.""" 1197: # Test with 2 colors (only green 500 and green 300) 1198: > gen = get_base_cmap(num_colors=2) 1199: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1200: tests/test_standard_graphs.py:148: TypeError 1201: _________________________ test_get_base_cmap_one_color _________________________ 1202: def test_get_base_cmap_one_color(): 1203: """Test the get_base_cmap function with one color.""" 1204: # Test with 1 color (only green 500) 1205: > gen = get_base_cmap(num_colors=1) 1206: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1207: tests/test_standard_graphs.py:161: TypeError 1208: __________________ test_get_base_cmap_more_than_three_colors ___________________ 1209: def test_get_base_cmap_more_than_three_colors(): 1210: """Test the get_base_cmap function with more than three colors.""" 1211: # Test with 4 colors (mix of all colors) 1212: > gen = get_base_cmap(num_colors=4) 1213: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1214: tests/test_standard_graphs.py:174: TypeError 1215: ________________ test_get_base_cmap_more_than_available_colors _________________ 1216: def test_get_base_cmap_more_than_available_colors(): 1217: """Test the get_base_cmap function with more colors than available.""" 1218: # Test with more colors than available, ensure cycling occurs 1219: > gen = get_base_cmap(num_colors=9) 1220: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1221: tests/test_standard_graphs.py:192: TypeError 1222: ______________________ test_get_base_cmap_cycle_behavior _______________________ 1223: def test_get_base_cmap_cycle_behavior(): 1224: """Test the cycling behavior of the get_base_cmap function.""" 1225: # Test with cycling colors 1226: > gen = get_base_cmap(num_colors=3) 1227: E TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1228: tests/test_standard_graphs.py:215: TypeError ... 
1270: pyretailscience/standard_graphs.py 130 75 70 3 38% 67-119, 245-320, 376->379, 418-426, 436 1271: pyretailscience/style/graph_utils.py 90 17 28 8 74% 21-22, 68-69, 85-91, 130, 137->145, 145->153, 186-192, 207, 222->228, 227 1272: pyretailscience/style/tailwind.py 68 2 10 1 96% 316-317 1273: ------------------------------------------------------------------------------------ 1274: TOTAL 1041 430 438 28 57% 1275: 3 files skipped due to complete coverage. 1276: Coverage XML written to file coverage.xml 1277: =========================== short test summary info ============================ 1278: FAILED tests/plots/test_histogram.py::test_plot_single_histogram_no_legend - AssertionError: expected call not found. ... 1292: ? ^^^^^^ 1293: + 'color': [ 1294: ? ^ 1295: + '#22c55e', 1296: + ], 1297: 'kind': 'hist', 1298: 'legend': False, 1299: } 1300: FAILED tests/plots/test_histogram.py::test_plot_multiple_histograms_with_legend_when_group_col_is_not_nan - AssertionError: expected call not found. ... 1315: + 'color': [ 1316: ? ^ 1317: + '#22c55e', 1318: + '#3b82f6', 1319: + ], 1320: 'kind': 'hist', 1321: 'legend': True, 1322: } 1323: FAILED tests/plots/test_histogram.py::test_plot_multiple_histograms_with_legend_when_value_col_is_a_list - AssertionError: expected call not found. ... 1338: + 'color': [ 1339: ? ^ 1340: + '#22c55e', 1341: + '#3b82f6', 1342: + ], 1343: 'kind': 'hist', 1344: 'legend': True, 1345: } 1346: FAILED tests/plots/test_histogram.py::test_plot_with_source_text - AssertionError: expected call not found. ... 1360: ? ^^^^^^ 1361: + 'color': [ 1362: ? ^ 1363: + '#22c55e', 1364: + ], 1365: 'kind': 'hist', 1366: 'legend': False, 1367: } 1368: FAILED tests/plots/test_histogram.py::test_plot_custom_labels - AssertionError: expected call not found. ... 1382: ? ^^^^^^ 1383: + 'color': [ 1384: ? ^ 1385: + '#22c55e', 1386: + ], 1387: 'kind': 'hist', 1388: 'legend': False, 1389: } 1390: FAILED tests/plots/test_histogram.py::test_plot_with_series - AssertionError: expected call not found. ... 1404: ? ^^^^^^ 1405: + 'color': [ 1406: ? ^ 1407: + '#22c55e', 1408: + ], 1409: 'kind': 'hist', 1410: 'legend': False, 1411: } 1412: FAILED tests/test_standard_graphs.py::test_get_base_cmap_three_or_fewer_colors - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1413: FAILED tests/test_standard_graphs.py::test_get_base_cmap_two_colors - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1414: FAILED tests/test_standard_graphs.py::test_get_base_cmap_one_color - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1415: FAILED tests/test_standard_graphs.py::test_get_base_cmap_more_than_three_colors - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1416: FAILED tests/test_standard_graphs.py::test_get_base_cmap_more_than_available_colors - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1417: FAILED tests/test_standard_graphs.py::test_get_base_cmap_cycle_behavior - TypeError: get_base_cmap() got an unexpected keyword argument 'num_colors' 1418: ================= 12 failed, 125 passed, 11 warnings in 4.55s ================== 1419: nbstripout...............................................................Passed 1420: ##[error]Process completed with exit code 1. ```
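Working from the actual calls shown in the logs above, one low-risk way to reconcile the histogram tests with the implementation is to widen the assertions so they also accept the extra keyword arguments. The sketch below is not the project's final fix; it assumes the existing `sample_dataframe` fixture and the fixture that mocks `pd.DataFrame.plot` (both visible in the log), and that the `alpha=0.5` and hex-color list passed by `plot()` are intentional:

```python
import matplotlib.pyplot as plt
import pandas as pd

from pyretailscience.plots.histogram import plot


def test_plot_single_histogram_no_legend(sample_dataframe, mocker):
    """Same test as in the log, with the assertion widened to tolerate alpha/color."""
    _, ax = plt.subplots()
    plot(df=sample_dataframe, value_col="quantity", ax=ax)
    pd.DataFrame.plot.assert_called_once_with(
        kind="hist",
        ax=ax,
        legend=False,
        color=mocker.ANY,  # the real call passes a concrete list such as ["#22c55e"]
        alpha=mocker.ANY,  # the real call passes alpha=0.5; pin it to 0.5 to be stricter
    )
```

The `get_base_cmap` failures are a signature mismatch rather than a plotting bug: the tests still pass `num_colors`, which the function no longer accepts. The log does not show the new signature, so the following is only a hedged sketch that assumes the refactored function takes no arguments, returns a color generator, and leaves it to the caller to slice off the colors it needs (the import path is likewise an assumption):

```python
from itertools import islice

# Assumed import path - the log names the function but not its module.
from pyretailscience.style.graph_utils import get_base_cmap


def test_get_base_cmap_three_or_fewer_colors():
    """Hypothetical rewrite of the failing test, without the removed num_colors kwarg."""
    gen = get_base_cmap()          # assumption: no arguments
    colors = list(islice(gen, 3))  # draw three colors from the generator
    assert len(colors) == 3
    assert all(isinstance(c, str) and c.startswith("#") for c in colors)
```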

    codiumai-pr-agent-pro[bot] commented 2 months ago


    CI Failure Feedback 🧐

    **Action:** Pre-Commit
    **Failed stage:** [Run Pre-commit](https://github.com/Data-Simply/pyretailscience/actions/runs/11150594039/job/30992086143) [❌]
    **Failed test name:** ruff-format
    **Failure summary:** The action failed for the following reasons:
  • The ruff-format hook failed because it reformatted one file, i.e. the code did not meet the formatting standard the hook enforces.
  • ruff also warned that the lint rules COM812 and ISC001 may conflict with the formatter and recommends disabling them, either by removing them from the `select`/`extend-select` configuration or by adding them to `ignore` (both rules are illustrated after the error logs below).
  • The trim trailing whitespace hook failed as well because some files still contained trailing whitespace that it had to remove.
  • Relevant error logs: ```yaml 1: ##[group]Operating System 2: Ubuntu ... 467: [INFO] This may take a few minutes... 468: [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. 469: [INFO] Once installed this environment will be reused. 470: [INFO] This may take a few minutes... 471: [INFO] Installing environment for https://github.com/kynan/nbstripout. 472: [INFO] Once installed this environment will be reused. 473: [INFO] This may take a few minutes... 474: ruff.....................................................................Passed 475: ruff-format..............................................................Failed 476: - hook id: ruff-format 477: - files were modified by this hook 478: warning: The following rules may cause conflicts when used with the formatter: `COM812`, `ISC001`. To avoid unexpected behavior, we recommend disabling these rules, either by removing them from the `select` or `extend-select` configuration, or adding them to the `ignore` configuration. 479: 1 file reformatted, 26 files left unchanged 480: trim trailing whitespace.................................................Failed ... 484: Fixing docs/assets/images/analysis_modules/plots/line_plot.svg 485: Fixing docs/assets/images/analysis_modules/plots/histogram_plot.svg 486: fix end of files.........................................................Passed 487: fix python encoding pragma...............................................Passed 488: check yaml...............................................................Passed 489: debug statements (python)................................................Passed 490: pytest...................................................................Passed 491: nbstripout...............................................................Passed 492: ##[error]Process completed with exit code 1. ```
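For context on the ruff warning, the two codes have well-known meanings: COM812 flags a missing trailing comma and ISC001 flags implicitly concatenated string literals on a single line. Both describe patterns the formatter may rewrite on its own, which is why ruff suggests dropping the rules rather than letting lint and format disagree. A small illustration (example code only, not taken from the repository):

```python
# Example code only, illustrating the two rules ruff warns about:
# COM812 = missing trailing comma, ISC001 = single-line implicit string concatenation.

# COM812 would ask for a trailing comma after "blue"; ruff-format may also rewrite this layout.
colors = [
    "green",
    "blue"
]

# ISC001 flags the implicit concatenation of adjacent literals; the formatter may merge them.
label = "histogram " "plot"

print(colors, label)
```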

    codiumai-pr-agent-pro[bot] commented 1 month ago


    CI Failure Feedback 🧐

    **Action:** Pre-Commit
    **Failed stage:** [Run Pre-commit](https://github.com/Data-Simply/pyretailscience/actions/runs/11158732636/job/31015645157) [❌]
    **Failed test name:** trim trailing whitespace
    **Failure summary:** The action failed because the trim trailing whitespace pre-commit hook did not pass: it found and removed trailing whitespace in the two SVG assets under docs/assets/images/analysis_modules/plots/ named in the log below, and the modified files caused the run to exit with a non-zero status. A rough Python equivalent of the cleanup is sketched after the log.
    Relevant error logs: ```yaml 1: ##[group]Operating System 2: Ubuntu ... 468: [INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. 469: [INFO] Once installed this environment will be reused. 470: [INFO] This may take a few minutes... 471: [INFO] Installing environment for https://github.com/kynan/nbstripout. 472: [INFO] Once installed this environment will be reused. 473: [INFO] This may take a few minutes... 474: ruff.....................................................................Passed 475: ruff-format..............................................................Passed 476: trim trailing whitespace.................................................Failed ... 480: Fixing docs/assets/images/analysis_modules/plots/line_plot.svg 481: Fixing docs/assets/images/analysis_modules/plots/histogram_plot.svg 482: fix end of files.........................................................Passed 483: fix python encoding pragma...............................................Passed 484: check yaml...............................................................Passed 485: debug statements (python)................................................Passed 486: pytest...................................................................Passed 487: nbstripout...............................................................Passed 488: ##[error]Process completed with exit code 1. ```
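The fix for this run is simply to run pre-commit locally and commit the whitespace changes it makes. For completeness, the sketch below is a rough Python equivalent of what the trim trailing whitespace hook did to the two SVG files named in the log (paths taken from the log; the hook itself remains the normal way to do this):

```python
from pathlib import Path

# The two files the hook reported fixing (see the log above).
svg_paths = [
    Path("docs/assets/images/analysis_modules/plots/line_plot.svg"),
    Path("docs/assets/images/analysis_modules/plots/histogram_plot.svg"),
]

for path in svg_paths:
    text = path.read_text()
    # Strip trailing whitespace from every line and keep a single trailing newline.
    cleaned = "\n".join(line.rstrip() for line in text.splitlines()) + "\n"
    if cleaned != text:
        path.write_text(cleaned)
```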

    mvanwyk commented 1 month ago

    @Mrglglglglgl lgtm. Feel free to merge once you've fixed the pre-commit issues