cmyr / cargo-instruments

A cargo plugin to generate Xcode Instruments trace files
MIT License

Cannot run benches in the main target, examples, or libs #84

Open ariofrio opened 1 year ago

ariofrio commented 1 year ago

I have a simple project with a benchmark using Rust's built-in #[bench] functionality written alongside unit tests, in the same file as the code I'm testing.

Usually I run benchmarks using:

cargo bench

Or if I only want to run some functions:

cargo bench <name_of_function>

But when I try to run cargo-instruments, I get the following error:

$ cargo instruments -t time --bench
error: The argument '--bench <NAME>' requires a value but none was supplied

USAGE:
    cargo instruments --template <TEMPLATE> <--example <NAME>|--bin <NAME>|--bench <NAME>>

For more information try --help

I don't know what I should put in as the `<NAME>`, since none of the following work:

cargo instruments -t time --bench <name_of_function>
cargo instruments -t time --bench <name_of_crate>
cargo instruments -t time --bench <name_of_crate>-test
cargo instruments -t time --bench <name_of_crate>-bench
cargo instruments -t time --bench crate
cargo instruments -t time --bench libtest
cargo instruments -t time --bench test
cargo instruments -t time --bench bench
cargo instruments -t time --bench benches
cargo instruments -t time --bench ''
cargo instruments -t time --bench -

I always get the following error:

Failed no bench target named `<the_name_i_tried>`.
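For context, as far as I can tell cargo derives default bench target names from the file stems under `benches/`, which would explain why none of my guesses match. A rough sketch of that naming rule (hypothetical filenames, not cargo's actual code):

```rust
use std::path::Path;

// Default bench targets come from `.rs` files under benches/,
// named after the file stem (e.g. benches/search_benchmarks.rs
// becomes the target `search_benchmarks`).
fn bench_target_names(files: &[&str]) -> Vec<String> {
    files
        .iter()
        .copied()
        .filter(|f| f.ends_with(".rs"))
        .filter_map(|f| Path::new(f).file_stem())
        .map(|stem| stem.to_string_lossy().into_owned())
        .collect()
}

fn main() {
    let names = bench_target_names(&["search_benchmarks.rs", "helper.txt"]);
    assert_eq!(names, vec!["search_benchmarks".to_string()]);
    println!("{:?}", names);
}
```

Since my `#[bench]` functions are in the library source rather than `benches/`, that list would be empty for my project.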

I get a similar error if I try to specify a name with --bench using cargo bench:

$ cargo bench --bench <name_of_crate>
error: no bench target named `<name_of_crate>`.

However, I can call cargo bench without arguments, or specify --bins, --bin <name_of_crate>, or --benches, to run my benchmark without profiling.
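For what it's worth, `--bench <NAME>` seems to match a bench *target*, not a `#[bench]` function: by default a file under `benches/`, or an explicit `[[bench]]` section in `Cargo.toml` like the (hypothetical) one below. Since my `#[bench]` functions live in the library itself, the project has no bench target at all, which would explain why every name fails.

```toml
# Hypothetical explicit bench target; `name` here is what
# `cargo bench --bench <NAME>` matches, not a #[bench] function name.
[[bench]]
name = "my_benches"
path = "benches/my_benches.rs"
```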

Any ideas?

ariofrio commented 1 year ago

Looking a bit at the PR that added bench support in cargo-instruments and the CLI and API docs for cargo, it seems that cargo bench can run lib, bin, example, test, and bench targets in benchmark mode.

So to support this generally, something like --as-bench would be needed. Then I could run:

cargo instruments -t time --as-bench

Or even

# Don't run main(), just the #[bench] in the example.
# Maybe if main() does extra stuff for visualization you don't want to measure.
cargo instruments -t time --as-bench --example stress_test

And --bench would be reserved for selecting bench targets specifically (usually in the "benches" directory).

As far as the code, a good start would be to add a flag like --as-bench that switches CompileOptions::new(cfg, CompileMode::Build) to CompileOptions::new(cfg, CompileMode::Bench). I think that would take care of it?
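The proposed flag logic is small enough to sketch with stand-in types (this is not cargo's actual API; the real `CompileMode` and `CompileOptions` live in the `cargo` crate):

```rust
// Stand-in for cargo's CompileMode, just to illustrate the switch.
#[derive(Debug, PartialEq)]
enum CompileMode {
    Build,
    Bench,
}

// With --as-bench, build the selected target in benchmark mode.
fn compile_mode(as_bench: bool) -> CompileMode {
    if as_bench {
        CompileMode::Bench
    } else {
        CompileMode::Build
    }
}

fn main() {
    assert_eq!(compile_mode(false), CompileMode::Build);
    assert_eq!(compile_mode(true), CompileMode::Bench);
    println!("ok");
}
```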

As a further improvement, to support all the benchmarking targets that cargo bench supports, it may be necessary to add --lib and --test as well, though they would only be valid when --as-bench is specified.

I'm not sure if --as-bench is the best name for this (maybe --benchmark or --benchmarks?), but by analogy with cargo bench, --bench should probably be reserved for selecting bench targets specifically. That combination would be a footgun, though; a nice error message could point users in the right direction when no value for --bench is provided.

brainstorm commented 1 year ago

I second this. I'm trying to run benchmarks on this branch for my project, and cargo-instruments seems to somehow skip running them (benches based on criterion-rs), with a "Success" message:

(base) rvalls@m1 htsget-rs % time cargo instruments -t time --all-features --bench search-benchmarks
    Finished dev [unoptimized + debuginfo] target(s) in 0.33s
   Profiling target/debug/deps/search_benchmarks-ce5d25a9a6b1a25b with template 'Time Profiler'
Testing Queries/[LIGHT] Bam query all
Success
Testing Queries/[LIGHT] Bam query specific
Success
Testing Queries/[LIGHT] Bam query header
Success
Testing Queries/[LIGHT] Cram query all
Success
Testing Queries/[LIGHT] Cram query specific
Success
Testing Queries/[LIGHT] Cram query header
Success
Testing Queries/[LIGHT] Vcf query all
Success
Testing Queries/[LIGHT] Vcf query specific
Success
Testing Queries/[LIGHT] Vcf query header
Success
Testing Queries/[LIGHT] Bcf query all
Success
Testing Queries/[LIGHT] Bcf query specific
Success
Testing Queries/[LIGHT] Bcf query header
Success

  Trace file target/instruments/search_benchmarks-ce5d25a9a6b1a25b_Time-Profiler_2023-05-17_135809-276.trace
cargo instruments -t time --all-features --bench search-benchmarks

2.68s user 0.63s system 39% cpu 8.449 total

Whereas a plain cargo bench behaves as it should, running all benchmarks, taking quite a bit more time, and reporting time and metrics for each:

 % time cargo bench
    Finished bench [optimized + debuginfo] target(s) in 0.34s
     Running unittests src/lib.rs (target/release/deps/htsget_actix-b4747d436621bd62)

running 13 tests
test tests::cors_preflight_request ... ignored
test tests::cors_simple_request ... ignored
test tests::get_http_tickets ... ignored
test tests::get_https_tickets ... ignored
test tests::parameterized_get_http_tickets ... ignored
test tests::parameterized_get_https_tickets ... ignored
test tests::parameterized_post_class_header_http_tickets ... ignored
test tests::parameterized_post_class_header_https_tickets ... ignored
test tests::parameterized_post_http_tickets ... ignored
test tests::parameterized_post_https_tickets ... ignored
test tests::post_http_tickets ... ignored
test tests::post_https_tickets ... ignored
test tests::service_info ... ignored

test result: ok. 0 passed; 0 failed; 13 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running unittests src/main.rs (target/release/deps/htsget_actix-e573a3ae2b442e58)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running benches/request_benchmarks.rs (target/release/deps/request_benchmarks-19f838a7f814f47b)
    Finished dev [unoptimized + debuginfo] target(s) in 0.19s
     Running `target/debug/htsget-actix`
1.5.0: Pulling from ga4gh/htsget-refserver
Digest: sha256:b93ab0593f58165351a136f19661228aec203ccaa74d746bd309e8936d8038de
Status: Image is up to date for ga4gh/htsget-refserver:1.5.0
docker.io/ga4gh/htsget-refserver:1.5.0
Server started on port 3000!
Requests/[LIGHT] simple request htsget-rs
                        time:   [2.2636 ms 2.2971 ms 2.3307 ms]
Requests/[LIGHT] simple request htsget-refserver
                        time:   [20.247 ms 20.563 ms 20.897 ms]
Found 2 outliers among 50 measurements (4.00%)
  2 (4.00%) high mild
Requests/[LIGHT] with region htsget-rs
                        time:   [7.1105 ms 7.2588 ms 7.4437 ms]
Found 3 outliers among 50 measurements (6.00%)
  3 (6.00%) high mild
Requests/[LIGHT] with region htsget-refserver
                        time:   [447.57 ms 450.06 ms 452.93 ms]
Found 1 outliers among 50 measurements (2.00%)
  1 (2.00%) high severe
Requests/[LIGHT] with two regions htsget-rs
                        time:   [12.268 ms 12.503 ms 12.807 ms]
Found 5 outliers among 50 measurements (10.00%)
  2 (4.00%) high mild
  3 (6.00%) high severe
Requests/[LIGHT] with two regions htsget-refserver
                        time:   [581.95 ms 582.78 ms 584.05 ms]
Found 1 outliers among 50 measurements (2.00%)
  1 (2.00%) high severe
Requests/[LIGHT] with VCF htsget-rs
                        time:   [2.6743 ms 2.6784 ms 2.6838 ms]
Found 3 outliers among 50 measurements (6.00%)
  2 (4.00%) high mild
  1 (2.00%) high severe
Requests/[LIGHT] with VCF htsget-refserver
                        time:   [197.09 ms 199.06 ms 201.37 ms]
Found 4 outliers among 50 measurements (8.00%)
  2 (4.00%) high mild
  2 (4.00%) high severe

18.59s user 49.83s system 18% cpu 6:09.93 total

I would expect traces to be generated for the benchmarks above, but that's not the case right now. Perhaps I'm using the CLI wrong, in which case @ariofrio's advice on rethinking the bench CLI args makes sense. @cmyr?