jonhoo / inferno

A Rust port of FlameGraph

Investigate and improve allocation behavior of collapse-perf #44

Open masklinn opened 5 years ago

masklinn commented 5 years ago

Since the project is fairly recent, I assume it's benched on a recent version of Rust (or even nightly; does Criterion require nightly?). That means it's using the system allocator, and IIRC collapsing flamegraph data is a lot of string munging and other allocation-heavy work, so jemalloc could well have an edge.
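
For concreteness, this is roughly what opting into jemalloc would look like; a minimal sketch, assuming the `jemallocator` crate is added as a dependency (this is not part of inferno itself):

```rust
// Minimal sketch, not inferno's actual code: route every heap allocation
// in the binary (String, Vec, Box, ...) through jemalloc instead of the
// system allocator. Assumes `jemallocator` is listed in Cargo.toml.
use jemallocator::Jemalloc;

#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() {
    // the existing collapse-perf logic would run here unchanged;
    // only the allocator behind it differs.
}
```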

jonhoo commented 5 years ago

I wish we were as careful with allocations in collapse-perf as we are in flamegraph following #37, but I think it's actually a fair bit trickier there since we really do need to allocate for each distinct function name. We should be able to avoid allocations for repeated function names though; probably by keeping a single string that we write the "current" function name into, and then only copy that to its own string if we discover that we need to stick it in the stack count map.
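
In code, that reuse idea might look roughly like the following sketch. The names and input shape here are hypothetical, not inferno's actual implementation: one scratch `String` is reused for every sample, and an owned key is only allocated the first time a stack is inserted into the count map.

```rust
use std::collections::HashMap;

// Hypothetical sketch of "reuse one buffer, copy only on first insert".
// Each sample is a slice of frame names; the folded key ("a;b;c") is
// built in a scratch String that is reused across samples.
fn count_stacks(samples: &[Vec<&str>]) -> HashMap<String, usize> {
    let mut counts: HashMap<String, usize> = HashMap::new();
    let mut key = String::new(); // scratch buffer reused for every sample

    for frames in samples {
        key.clear();
        for (i, frame) in frames.iter().enumerate() {
            if i > 0 {
                key.push(';');
            }
            key.push_str(frame);
        }

        if let Some(n) = counts.get_mut(key.as_str()) {
            *n += 1; // repeated stack: no new allocation
        } else {
            // first occurrence: only now copy the scratch buffer into an owned key
            counts.insert(key.clone(), 1);
        }
    }
    counts
}

fn main() {
    let samples = vec![
        vec!["main", "run", "parse"],
        vec!["main", "run", "parse"],
        vec!["main", "run", "render"],
    ];
    for (stack, n) in count_stacks(&samples) {
        println!("{} {}", stack, n);
    }
}
```

With that shape, the only remaining per-input allocation is the `clone()` on the first occurrence of each distinct stack, which is the part that genuinely has to be owned by the map.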

In any case, yes, jemalloc might help a decent amount here. That said, I'd prefer to just get rid of those allocations altogether. And given that collapse-perf is now faster than perf script, it's not clear to me that we're in any rush to squeeze out that performance. :p Seeing jemalloc results would certainly be interesting and tell us how important getting rid of those allocations is though!

meven commented 5 years ago

inferno-collapse-perf-origin is vanilla inferno; inferno-collapse-perf is using jemalloc.

Using jemalloc is faster most of the time, but not significantly.

Benchmark #1: target/release/inferno-collapse-perf-origin --all ./flamegraph/test/perf-iperf-stacks-pidtid-01.txt
  Time (mean ± σ):       4.4 ms ±   1.7 ms    [User: 3.6 ms, System: 0.8 ms]
  Range (min … max):     2.3 ms …  12.0 ms    314 runs

  Warning: Command took less than 5 ms to complete. Results might be inaccurate.

Benchmark #2: target/release/inferno-collapse-perf --all ./flamegraph/test/perf-iperf-stacks-pidtid-01.txt
  Time (mean ± σ):       4.4 ms ±   1.7 ms    [User: 3.4 ms, System: 1.0 ms]
  Range (min … max):     2.5 ms …  16.0 ms    592 runs

  Warning: Command took less than 5 ms to complete. Results might be inaccurate.
  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Summary
  'target/release/inferno-collapse-perf --all ./flamegraph/test/perf-iperf-stacks-pidtid-01.txt' ran
    1.01 ± 0.55 times faster than 'target/release/inferno-collapse-perf-origin --all ./flamegraph/test/perf-iperf-stacks-pidtid-01.txt'

==>  ./flamegraph/test/perf-java-stacks-01.txt  <==
Benchmark #1: target/release/inferno-collapse-perf-origin --all ./flamegraph/test/perf-java-stacks-01.txt
  Time (mean ± σ):       2.5 ms ±   0.9 ms    [User: 2.0 ms, System: 0.5 ms]
  Range (min … max):     1.5 ms …   9.3 ms    925 runs

  Warning: Command took less than 5 ms to complete. Results might be inaccurate.
  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Benchmark #2: target/release/inferno-collapse-perf --all ./flamegraph/test/perf-java-stacks-01.txt
  Time (mean ± σ):       2.3 ms ±   0.9 ms    [User: 1.8 ms, System: 0.5 ms]
  Range (min … max):     1.5 ms …   8.8 ms    857 runs

  Warning: Command took less than 5 ms to complete. Results might be inaccurate.
  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Summary
  'target/release/inferno-collapse-perf --all ./flamegraph/test/perf-java-stacks-01.txt' ran
    1.11 ± 0.59 times faster than 'target/release/inferno-collapse-perf-origin --all ./flamegraph/test/perf-java-stacks-01.txt'

==>  ./flamegraph/test/perf-numa-stacks-01.txt  <==
Benchmark #1: target/release/inferno-collapse-perf-origin --all ./flamegraph/test/perf-numa-stacks-01.txt
  Time (mean ± σ):       2.4 ms ±   0.8 ms    [User: 2.1 ms, System: 0.4 ms]
  Range (min … max):     1.8 ms …  19.2 ms    736 runs

  Warning: Command took less than 5 ms to complete. Results might be inaccurate.
  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Benchmark #2: target/release/inferno-collapse-perf --all ./flamegraph/test/perf-numa-stacks-01.txt
  Time (mean ± σ):       2.3 ms ±   0.4 ms    [User: 2.0 ms, System: 0.4 ms]
  Range (min … max):     2.0 ms …   7.1 ms    808 runs

  Warning: Command took less than 5 ms to complete. Results might be inaccurate.
  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Summary
  'target/release/inferno-collapse-perf --all ./flamegraph/test/perf-numa-stacks-01.txt' ran
    1.04 ± 0.41 times faster than 'target/release/inferno-collapse-perf-origin --all ./flamegraph/test/perf-numa-stacks-01.txt'

==>  ./flamegraph/test/perf-rust-Yamakaky-dcpu.txt  <==
Benchmark #1: target/release/inferno-collapse-perf-origin --all ./flamegraph/test/perf-rust-Yamakaky-dcpu.txt
  Time (mean ± σ):       1.7 ms ±   0.6 ms    [User: 1.4 ms, System: 0.3 ms]
  Range (min … max):     1.2 ms …   5.5 ms    896 runs

  Warning: Command took less than 5 ms to complete. Results might be inaccurate.
  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Benchmark #2: target/release/inferno-collapse-perf --all ./flamegraph/test/perf-rust-Yamakaky-dcpu.txt
  Time (mean ± σ):       1.8 ms ±   0.5 ms    [User: 1.5 ms, System: 0.3 ms]
  Range (min … max):     1.3 ms …   7.3 ms    1339 runs

  Warning: Command took less than 5 ms to complete. Results might be inaccurate.
  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Summary
  'target/release/inferno-collapse-perf-origin --all ./flamegraph/test/perf-rust-Yamakaky-dcpu.txt' ran
    1.06 ± 0.48 times faster than 'target/release/inferno-collapse-perf --all ./flamegraph/test/perf-rust-Yamakaky-dcpu.txt'

==>  ./flamegraph/test/perf-vertx-stacks-01.txt  <==
Benchmark #1: target/release/inferno-collapse-perf-origin --all ./flamegraph/test/perf-vertx-stacks-01.txt
  Time (mean ± σ):       9.0 ms ±   1.7 ms    [User: 8.2 ms, System: 0.8 ms]
  Range (min … max):     7.9 ms …  18.7 ms    249 runs

  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Benchmark #2: target/release/inferno-collapse-perf --all ./flamegraph/test/perf-vertx-stacks-01.txt
  Time (mean ± σ):       8.5 ms ±   1.4 ms    [User: 7.7 ms, System: 0.8 ms]
  Range (min … max):     7.6 ms …  17.6 ms    341 runs

  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Summary
  'target/release/inferno-collapse-perf --all ./flamegraph/test/perf-vertx-stacks-01.txt' ran
    1.06 ± 0.26 times faster than 'target/release/inferno-collapse-perf-origin --all ./flamegraph/test/perf-vertx-stacks-01.txt'

masklinn commented 5 years ago

Thanks for checking this, @meven. The "gains", where they appear at all, seem mostly insignificant, which makes the original suggestion moot.

I won't close the issue, since @jonhoo renamed it to something more useful, but either way the jemalloc track can safely be dropped.