cpp-linter / cpp-linter-rs

A CLI tool that scans a batch of files with clang-tidy and/or clang-format, then provides feedback in the form of comments, annotations, summaries, and reviews.
https://cpp-linter.github.io/cpp-linter-rs/
MIT License

verify concurrency behaves as intended #10

Open 2bndy5 opened 3 months ago

2bndy5 commented 3 months ago

Recently, I implemented the tokio runtime for asynchronous execution of the file scans.

However, I didn't notice any performance boost in the tests, so I'm wondering if I missed something. Additionally, some await points in some of the tests do not show as covered in the coverage reports (using llvm-cov reports, not Codecov, which only shows line coverage).
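
For context, a common reason an async port shows no speedup is that the futures are awaited one at a time instead of being handed to the runtime to make progress concurrently. Here's a minimal sketch of the difference, where scan_file is a hypothetical stand-in for running the clang tools on one file (not this crate's actual code):

```rust
use tokio::task::JoinSet;

// Hypothetical stand-in for running clang-tidy/clang-format on one file.
async fn scan_file(_file: String) { /* ... */ }

// Awaiting each future in turn never overlaps any work, even on a
// multi-threaded runtime; this is easy to do by accident in a port.
async fn scan_sequential(files: Vec<String>) {
    for file in files {
        scan_file(file).await;
    }
}

// Spawning each scan as a task lets the runtime drive them concurrently.
async fn scan_concurrent(files: Vec<String>) {
    let mut set = JoinSet::new();
    for file in files {
        set.spawn(scan_file(file));
    }
    while let Some(result) = set.join_next().await {
        result.expect("a scan task panicked");
    }
}

#[tokio::main]
async fn main() {
    let files = vec!["a.cpp".to_string(), "b.cpp".to_string()];
    scan_sequential(files.clone()).await;
    scan_concurrent(files).await;
}
```

Note that even spawned tasks only overlap blocking work (like a synchronous subprocess call) up to the runtime's worker-thread count, so how the clang processes are launched matters too.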

2bndy5 commented 2 months ago

> some await points in some of the tests do not show as covered in the coverage reports

This is a known problem. See https://github.com/rust-lang/rust/issues/98712

2bndy5 commented 1 month ago

I noticed that the -j, --jobs argument is not implemented here, although it is in v1.x. While resolving this issue, I should add that argument.
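
For illustration, here's a minimal clap-based sketch of how the flag could look and how its value could cap tokio's worker threads; the names, default, and wiring are assumptions, not the project's actual CLI:

```rust
use clap::Parser;

#[derive(Parser, Debug)]
struct Cli {
    /// Number of parallel jobs; 0 is assumed here to mean
    /// "use all available cores" (a common convention).
    #[arg(short = 'j', long = "jobs", default_value_t = 0)]
    jobs: usize,
}

fn main() {
    let cli = Cli::parse();
    // Translate the flag into a worker-thread count for the runtime.
    let threads = if cli.jobs == 0 {
        std::thread::available_parallelism()
            .map(|n| n.get())
            .unwrap_or(1)
    } else {
        cli.jobs
    };
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(threads)
        .enable_all()
        .build()
        .expect("failed to build the tokio runtime");
    runtime.block_on(async {
        // ... run the file scans here ...
    });
}
```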

2bndy5 commented 1 month ago

Concurrency is working 👀

I built a release binary and ran some simple benchmarks using hyperfine.

pure-python vs native rust

hyperfine --shell powershell --warmup 1 --runs 2 --command-name=pure-python 'cpp-linter -j=30 -l=false -i=!target' --command-name=pure-rust './target/release/cpp-linter.exe -l=0 -i=!target'
Benchmark 1: pure-python
  Time (mean ± σ):     46.621 s ±  0.870 s    [User: 185.804 s, System: 63.301 s]
  Range (min … max):   46.006 s … 47.236 s    2 runs

Benchmark 2: pure-rust
  Time (mean ± σ):     19.769 s ±  0.125 s    [User: 86.546 s, System: 20.152 s]
  Range (min … max):   19.680 s … 19.857 s    2 runs

Summary
  pure-rust ran
    2.36 ± 0.05 times faster than pure-python

2.36x faster is a significant improvement!

py-binding vs native rust

hyperfine --shell powershell --warmup 1 --runs 2 --command-name=py-binding 'cpp-linter -l=0 -i=!target' --command-name=native-rust './target/release/cpp-linter.exe -l=0 -i=!target'
Benchmark 1: py-binding
  Time (mean ± σ):     19.746 s ±  0.356 s    [User: 85.412 s, System: 20.656 s]
  Range (min … max):   19.494 s … 19.998 s    2 runs

Benchmark 2: native-rust
  Time (mean ± σ):     19.470 s ±  0.143 s    [User: 86.358 s, System: 21.062 s]
  Range (min … max):   19.369 s … 19.571 s    2 runs

Summary
  native-rust ran
    1.01 ± 0.02 times faster than py-binding

We'll call this a negligible result.

py-binding vs node-binding

hyperfine --shell powershell --warmup 1 --runs 2 --command-name=py-binding 'cpp-linter -l=false -i=!target' --command-name=node-binding 'node ./node-binding/cli.js -l=0 -i=!target'
Benchmark 1: py-binding
  Time (mean ± σ):     19.657 s ±  0.262 s    [User: 85.682 s, System: 21.050 s]
  Range (min … max):   19.472 s … 19.842 s    2 runs

Benchmark 2: node-binding
  Time (mean ± σ):     19.907 s ±  0.367 s    [User: 85.362 s, System: 20.558 s]
  Range (min … max):   19.647 s … 20.167 s    2 runs

Summary
  py-binding ran
    1.01 ± 0.02 times faster than node-binding

This showed a negligible difference in performance as well.

Still room for improvement 🤔

The benchmarks above do not exercise HTTP requests (which are also executed asynchronously in Rust). And I'm not yet using tokio::process::Command, tokio's async alternative to Rust's std::process::Command (which is roughly equivalent to Python's subprocess.run).
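
For reference, the swap itself is small; a sketch using tokio::process::Command (this needs tokio's process feature, and the clang-tidy arguments here are just illustrative):

```rust
use tokio::process::Command;

// Unlike std::process::Command, awaiting the child here suspends the task
// instead of blocking a runtime worker thread while the process runs.
async fn run_clang_tidy(file: &str) -> std::io::Result<std::process::Output> {
    Command::new("clang-tidy").arg(file).output().await
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let output = run_clang_tidy("src/main.cpp").await?;
    println!("clang-tidy exited with {}", output.status);
    Ok(())
}
```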

Add a CI job to run benchmarks

It would be nice to spot performance regressions using a CI workflow. This would require an additional checkout of the parent commit (on pushes to the main branch) or of the main branch (on PR synchronize events).
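
As a rough sketch, the benchmark step could build both checkouts and reuse the hyperfine comparison from above (the ./baseline path and build layout are hypothetical):

```shell
# Assumes CI has checked out the baseline revision into ./baseline
# (the parent commit on pushes, or the main branch on PR sync events).
cargo build --release
cargo build --release --manifest-path baseline/Cargo.toml
hyperfine --warmup 1 --runs 2 \
  --command-name=baseline './baseline/target/release/cpp-linter -l=0 -i=!target' \
  --command-name=current './target/release/cpp-linter -l=0 -i=!target'
```

hyperfine's --export-json output could then be compared against a threshold to fail the job when a regression is detected.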