cpp-linter / cpp-linter-rs

A CLI tool that scans a batch of files with clang-tidy and/or clang-format, then provides feedback in the form of comments, annotations, summaries, and reviews.
https://cpp-linter.github.io/cpp-linter-rs/
MIT License

add CI to detect performance regressions #53

Closed 2bndy5 closed 1 month ago

2bndy5 commented 1 month ago

Compares release builds of the cpp-linter binary and the pure-Python package across:

  1. the previous commit (for push events) or the base branch of a PR
  2. the newest commit on the branch
  3. the latest v1.x release of the pure-python cpp-linter package

Caching is enabled to reduce CI runtime.

Results are output to the CI workflow's job summary. This CI does not (currently) fail when a regression is detected.
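Writing results to the workflow's job summary relies on GitHub Actions exposing a file path in the `GITHUB_STEP_SUMMARY` environment variable; Markdown appended to that file renders on the run page. A minimal sketch (the table contents here are purely illustrative, not real benchmark numbers):

```python
import os

def append_to_job_summary(markdown: str) -> None:
    """Append Markdown to the GitHub Actions job summary, if available."""
    summary_path = os.environ.get("GITHUB_STEP_SUMMARY")
    if summary_path is None:
        # Not running under GitHub Actions; fall back to stdout.
        print(markdown)
        return
    with open(summary_path, "a", encoding="utf-8") as summary:
        summary.write(markdown + "\n")

# Hypothetical benchmark table for illustration only.
append_to_job_summary(
    "| Build | Mean time (s) |\n"
    "| --- | --- |\n"
    "| base branch | 1.00 |\n"
    "| PR head | 0.95 |\n"
)
```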

Summary by CodeRabbit

2bndy5 commented 1 month ago

This is bugging me 😑

Locally I invoke the exact same commands (on the same machine, dual-booted):

In the CI workflow (which uses ubuntu-latest), it takes around 165 seconds to run!

At least the runtime is consistent between the pure-Python and pure-Rust implementations. It does not seem to matter whether I use clang v18 or v14.
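For a rough local comparison, mean wall-clock time per binary can be gathered with nothing beyond the standard library. The command below is a placeholder, not the actual cpp-linter invocation used here:

```python
import statistics
import subprocess
import sys
import time

def time_command(argv: list[str], runs: int = 3) -> float:
    """Return the mean wall-clock time in seconds over several runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(argv, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)

# Placeholder command; substitute the real cpp-linter invocation.
mean_seconds = time_command([sys.executable, "--version"])
print(f"mean: {mean_seconds:.3f}s")
```

Dedicated tools like hyperfine add warmup runs and statistical outlier detection on top of this, which matters when runtimes are short or noisy.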

coderabbitai[bot] commented 1 month ago

Walkthrough

A new performance regression testing workflow has been added to the cpp-linter project through the introduction of the perf-test.yml file in the GitHub Actions workflows. This workflow includes three jobs: building the project for current and previous commits, benchmarking performance differences, and reporting when no source changes occur. Additionally, a new script named perf_annotate.py has been created to analyze benchmark results from a JSON file, providing insights into performance changes and potential regressions.

Changes

| File Path | Change Summary |
| --- | --- |
| `.github/workflows/perf-test.yml` | Introduced a new workflow for performance regression testing with jobs for building, benchmarking, and reporting. |
| `.github/workflows/perf_annotate.py` | Added a script to analyze performance benchmarks from a JSON file, calculating differences and outputting results. |
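The described behavior of perf_annotate.py (read benchmark JSON, compute differences, flag potential regressions) can be sketched as below. The JSON layout assumed here follows hyperfine's `--export-json` format, and the 5% threshold is a hypothetical choice; the actual script may differ on both counts:

```python
import json

def annotate(json_text: str, threshold: float = 0.05) -> str:
    """Compare the first two benchmark results and flag a regression.

    Assumes a hyperfine-style export:
    {"results": [{"command": ..., "mean": ...}, ...]}
    """
    results = json.loads(json_text)["results"]
    base, head = results[0]["mean"], results[1]["mean"]
    change = (head - base) / base
    verdict = "regression" if change > threshold else "ok"
    return f"{change:+.1%} vs base ({verdict})"

sample = json.dumps({
    "results": [
        {"command": "base", "mean": 2.0},
        {"command": "head", "mean": 2.3},
    ]
})
print(annotate(sample))  # +15.0% vs base (regression)
```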

Poem

🐰 In the meadow, where benchmarks play,
A new workflow hops in, brightening the day.
With builds and tests, it dances around,
Reporting changes, where performance is found.
So let's cheer for the code, both swift and spry,
As we measure and analyze, oh me, oh my! 🌼


📜 Recent review details — Configuration used: CodeRabbit UI. Review profile: CHILL.
📥 Commits: files that changed from the base of the PR, between 0bdda0a762b1f84d34c8aa62a98fd98d2c67530e and 409a238111f7f3e1510333c6a7264e1a24b91aeb.
📒 Files selected for processing (2):
* .github/workflows/perf-test.yml (1 hunk)
* .github/workflows/perf_annotate.py (1 hunk)
🚧 Files skipped from review as they are similar to previous changes (2):
* .github/workflows/perf-test.yml
* .github/workflows/perf_annotate.py
2bndy5 commented 1 month ago

I give up for now. I may revisit this later when I experiment with improving the async performance in Rust.