
[Perf] Windows/x64: Regressions in System.IO.Compression #4318

Open · performanceautofiler[bot] opened this issue 4 months ago

performanceautofiler[bot] commented 4 months ago

Run Information

| Name | Value |
| --- | --- |
| Architecture | x64 |
| OS | Windows 10.0.22631 |
| Queue | ViperWindows |
| Baseline | 101c0daf5aa76451304704481a0d82d328498950 |
| Compare | 1164d2fe49449a914cc86f3b59973be7a60668fd |
| Diff | Diff |
| Configs | CompilationMode:tiered, RunKind:micro |

Regressions in System.IO.Compression.Deflate

| Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| System.IO.Compression.Deflate.Compress(level: Fastest, file: "sum") | 175.89 μs | 233.63 μs | 1.33 | 0.16 | False | | | |
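For reference, the Test/Base ratio is the compare measurement divided by the baseline: 233.63 μs / 175.89 μs ≈ 1.33, i.e. this benchmark is roughly 33% slower on the compare commit.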

Graph · Test Report

Repro

General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md

```
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net8.0 --filter 'System.IO.Compression.Deflate*'
```
### System.IO.Compression.Deflate.Compress(level: Fastest, file: "sum")

#### ETL Files

#### Histogram

#### JIT Disasms

### Docs

[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
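For orientation, the regressed benchmark measures compressing a payload with `DeflateStream` at `CompressionLevel.Fastest`. A minimal, self-contained sketch of the measured operation follows; this is only an illustration, not the repo's harness (the real benchmark uses BenchmarkDotNet and the repo's test payloads, and the `"sum"` path here simply stands in for the file named in the benchmark parameters):

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.IO.Compression;

class DeflateCompressSketch
{
    static void Main()
    {
        // Hypothetical payload path; the real benchmark uses the "sum" file
        // shipped with the dotnet/performance repo as one of its parameters.
        byte[] payload = File.ReadAllBytes("sum");

        using var destination = new MemoryStream();
        var sw = Stopwatch.StartNew();

        // The measured operation: one full compression pass at the Fastest level.
        using (var deflate = new DeflateStream(destination, CompressionLevel.Fastest, leaveOpen: true))
        {
            deflate.Write(payload, 0, payload.Length);
        }

        sw.Stop();
        Console.WriteLine($"Compressed {payload.Length} -> {destination.Length} bytes " +
                          $"in {sw.Elapsed.TotalMilliseconds:F3} ms");
    }
}
```

A single stopwatch pass like this is only a rough check; the numbers in the table above come from BenchmarkDotNet, which runs many warmed-up iterations and reports the statistics.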

Run Information

| Name | Value |
| --- | --- |
| Architecture | x64 |
| OS | Windows 10.0.22631 |
| Queue | ViperWindows |
| Baseline | 101c0daf5aa76451304704481a0d82d328498950 |
| Compare | 1164d2fe49449a914cc86f3b59973be7a60668fd |
| Diff | Diff |
| Configs | CompilationMode:tiered, RunKind:micro |

Regressions in System.IO.Compression.Gzip

| Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| System.IO.Compression.Gzip.Compress(level: Fastest, file: "sum") | 172.41 μs | 206.58 μs | 1.20 | 0.08 | False | | | |
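Here the ratio works out to 206.58 μs / 172.41 μs ≈ 1.20, i.e. roughly 20% slower on the compare commit.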

Graph · Test Report

Repro

General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md

```
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net8.0 --filter 'System.IO.Compression.Gzip*'
```
### System.IO.Compression.Gzip.Compress(level: Fastest, file: "sum")

#### ETL Files

#### Histogram

#### JIT Disasms

### Docs

[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
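The Gzip benchmark exercises the same underlying deflate implementation through `GZipStream`, which adds a gzip header and CRC trailer around the deflate-compressed payload, so the two regressions likely share a cause. The Deflate sketch above carries over with only the wrapper stream changed; for completeness (again, an illustration under the same assumptions, not the repo's harness):

```csharp
using System;
using System.IO;
using System.IO.Compression;

class GzipCompressSketch
{
    static void Main()
    {
        // Same hypothetical payload path as in the Deflate sketch above.
        byte[] payload = File.ReadAllBytes("sum");

        using var destination = new MemoryStream();

        // Identical to the Deflate sketch except for the wrapper stream:
        // GZipStream emits the same deflate data plus a gzip header/trailer.
        using (var gzip = new GZipStream(destination, CompressionLevel.Fastest, leaveOpen: true))
        {
            gzip.Write(payload, 0, payload.Length);
        }

        Console.WriteLine($"Compressed {payload.Length} -> {destination.Length} bytes");
    }
}
```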

LoopedBard3 commented 4 months ago

Likely due to: https://github.com/dotnet/runtime/pull/104454 @carlossanlop

Improvements are listed in the PR https://github.com/dotnet/runtime/pull/104454

Other regressions:

carlossanlop commented 4 months ago

Improvements are listed in the PR dotnet/runtime#104454

I also ran the microbenchmarks on a variety of machines and got these results myself, which I posted here:

The maintainers of zlib-ng shared some cases where regressions are expected, starting with this comment and a few more underneath: https://github.com/dotnet/runtime/pull/102403#issuecomment-2197498867

But I have a question: Why am I seeing the exact same values between baseline and compare?

(screenshots: baseline and compare report pages showing identical values)

LoopedBard3 commented 4 months ago

That seems like a bug in the report generation; we will look into it. The numbers in the table on this issue look correct, though, so I would use those for the baseline value. You should also be able to get specific data points by clicking on the graph in the report and hovering over the spot you want the value for.

DrewScoggins commented 3 months ago

> Improvements are listed in the PR dotnet/runtime#104454
>
> I also ran the microbenchmarks on a variety of machines and got these results myself, which I posted here:
>
> The maintainers of zlib-ng shared some cases where regressions are expected, starting with this comment and a few more underneath: dotnet/runtime#102403 (comment)
>
> But I have a question: Why am I seeing the exact same values between baseline and compare?
>
> (screenshots: baseline and compare report pages showing identical values)

@carlossanlop

Almost certainly you are seeing the same values for baseline and compare because you are looking at the all-test-history pages that we generate. When we added support for those, we reused our existing report template, which was designed for reports with differing baseline and compare values. Hope this makes sense.