dotnet / runtime

.NET is a cross-platform runtime for cloud, mobile, desktop, and IoT apps.
https://docs.microsoft.com/dotnet/core/
MIT License

[Perf] Windows/x64: Regressions in System.Text.RegularExpressions.Tests.Perf_Regex_Common 2/28/2024 9:23:39 AM #99318

Status: Closed (closed by performanceautofiler[bot] 3 months ago)

performanceautofiler[bot] commented 7 months ago

### Run Information

| Name | Value |
| -- | -- |
| Architecture | x64 |
| OS | Windows 10.0.22621 |
| Queue | OwlWindows |
| Baseline | 8538e722e1f30c526827c7c9a6abfbee5ff3b164 |
| Compare | 5742895d7c7493dfae4ac40ab36019995d256dd1 |
| Diff | Diff |
| Configs | CompilationMode:tiered, RunKind:micro |

### Regressions in System.Linq.Tests.Perf_Enumerable

| Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio |
| -- | -- | -- | -- | -- | -- | -- | -- | -- |
| System.Linq.Tests.Perf_Enumerable.ElementAt(input: IList) | 1.60 ns | 5.38 ns | 3.36 | 0.07 | False | | | |

![graph]() [Test Report]()

### Repro

General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md

```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net8.0 --filter 'System.Linq.Tests.Perf_Enumerable*'
```

### Payloads

[Baseline]() [Compare]()

### System.Linq.Tests.Perf_Enumerable.ElementAt(input: IList)

#### ETL Files

#### Histogram

#### JIT Disasms

### Docs

[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)

[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
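
For context on what this microbenchmark measures: `Enumerable.ElementAt` over a source that is statically typed as `IEnumerable<int>` but backed by an `IList<int>`, which lets LINQ take the indexer fast path. The sketch below approximates that shape with BenchmarkDotNet; the collection size, index, and class layout are illustrative assumptions, not the dotnet/performance source.

```csharp
using System.Collections.Generic;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class ElementAtSketch
{
    // Assumption: the size and index below are placeholders; the real
    // Perf_Enumerable benchmark parameterizes its input collections.
    private IEnumerable<int> _ilistSource;

    [GlobalSetup]
    public void Setup() => _ilistSource = Enumerable.Range(0, 100).ToList();

    [Benchmark]
    public int ElementAt() => _ilistSource.ElementAt(50);
}

public class Program
{
    public static void Main(string[] args) => BenchmarkRunner.Run<ElementAtSketch>();
}
```

Because the backing collection implements `IList<int>`, `ElementAt` should reduce to little more than a bounds check and an indexer call, which is why a jump from roughly 1.6 ns to 5.4 ns per invocation stands out.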

performanceautofiler[bot] commented 7 months ago

### Run Information

| Name | Value |
| -- | -- |
| Architecture | x64 |
| OS | Windows 10.0.22621 |
| Queue | OwlWindows |
| Baseline | 8538e722e1f30c526827c7c9a6abfbee5ff3b164 |
| Compare | 4c90107c80a7f8eb2f38f1494b4e17d48d5c7828 |
| Diff | Diff |
| Configs | CompilationMode:tiered, RunKind:micro |

### Regressions in System.Text.RegularExpressions.Tests.Perf_Regex_Common

| Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio |
| -- | -- | -- | -- | -- | -- | -- | -- | -- |
| System.Text.RegularExpressions.Tests.Perf_Regex_Common.Backtracking(Options: Compiled) | 80.90 ns | 98.95 ns | 1.22 | 0.20 | False | | | |

![graph]() [Test Report]()

### Repro

General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md

```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net8.0 --filter 'System.Text.RegularExpressions.Tests.Perf_Regex_Common*'
```

### Payloads

[Baseline]() [Compare]()

### System.Text.RegularExpressions.Tests.Perf_Regex_Common.Backtracking(Options: Compiled)

#### ETL Files

#### Histogram

#### JIT Disasms

### Docs

[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)

[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
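
This regex benchmark measures a pattern that makes the backtracking engine explore many failed alternatives, compiled with `RegexOptions.Compiled`. The sketch below shows that general shape; the pattern, input, and sizes are illustrative assumptions rather than the actual Perf_Regex_Common source.

```csharp
using System.Text.RegularExpressions;
using BenchmarkDotNet.Attributes;

public class RegexBacktrackingSketch
{
    // Assumption: a nested quantifier plus a non-matching tail, a classic
    // shape that tends to cause heavy backtracking before the match fails.
    private Regex _regex;
    private string _input;

    [GlobalSetup]
    public void Setup()
    {
        _regex = new Regex(@"^(a+)+$", RegexOptions.Compiled);
        _input = new string('a', 10) + "b";
    }

    [Benchmark]
    public bool Backtracking() => _regex.IsMatch(_input);
}
```

With only ten `a` characters the failed match stays fast while still spending essentially all of its time in the backtracking loop; it can be run with `BenchmarkRunner.Run<RegexBacktrackingSketch>()` just like the previous sketch.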

### Run Information

| Name | Value |
| -- | -- |
| Architecture | x64 |
| OS | Windows 10.0.22621 |
| Queue | OwlWindows |
| Baseline | d06ebfee1cfd1ef437784013e93ccfcd31334ac0 |
| Compare | 5742895d7c7493dfae4ac40ab36019995d256dd1 |
| Diff | Diff |
| Configs | CompilationMode:tiered, RunKind:micro |

### Regressions in Benchstone.BenchF.Whetsto

| Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio |
| -- | -- | -- | -- | -- | -- | -- | -- | -- |
| Benchstone.BenchF.Whetsto.Test | 573.27 ms | 614.59 ms | 1.07 | 0.01 | False | | | |

![graph]() [Test Report]()

### Repro

General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md

```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net8.0 --filter 'Benchstone.BenchF.Whetsto*'
```

### Payloads

[Baseline]() [Compare]()

### Benchstone.BenchF.Whetsto.Test

#### ETL Files

#### Histogram

#### JIT Disasms

### Docs

[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)

[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)

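Benchstone.BenchF.Whetsto is a C# port of the classic Whetstone synthetic benchmark, so its time is dominated by tight floating-point loops and transcendental math calls rather than allocations or library code. The condensed kernel below shows the style of work it stresses; the constants and iteration counts are illustrative, not the Benchstone source.

```csharp
using System;

public static class WhetstoneStyleKernel
{
    // Illustrative iteration counts; the real Benchstone.BenchF.Whetsto.Test
    // runs a fixed mix of several such modules for far more iterations.
    private const int Iterations = 1_000_000;

    public static double Run()
    {
        double x = 0.75, y = 0.5, t = 0.499975;

        // Module of chained floating-point arithmetic.
        for (int i = 0; i < Iterations; i++)
        {
            x = t * (x + y);
            y = t * (x + y);
        }

        // Module dominated by transcendental library calls.
        double z = 1.0;
        for (int i = 0; i < Iterations / 10; i++)
        {
            z = t * Math.Atan(2.0 * Math.Sin(z) * Math.Cos(z) / (Math.Cos(z + z) + 1.0));
        }

        return x + y + z;
    }

    public static void Main() => Console.WriteLine(Run());
}
```

A 7% regression on this kind of kernel usually points at codegen (register allocation, loop layout, or math call handling) rather than at any one library change.
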
DrewScoggins commented 7 months ago

This is the range of commits; nothing in it seems to jump out, but we are seeing this regression across all of our configurations.

https://github.com/dotnet/runtime/compare/d06ebfee1cfd1ef437784013e93ccfcd31334ac0...e2acec9dfc2df4a067f2d029c65b830a4022ebd7

jeffschwMSFT commented 7 months ago

Perhaps there is more than one reason for these regressions. Given there is a LINQ regression, I think this change should be considered: https://github.com/dotnet/runtime/commit/e101ae2bd1c198ba7aaa209d1a4c55d6ce6b4073

ghost commented 7 months ago

Tagging subscribers to this area: @dotnet/area-system-linq. See info in area-owners.md if you want to be subscribed.

Issue Details
---
Author: performanceautofiler[bot]
Assignees: -
Labels: `area-System.Linq`, `os-windows`, `tenet-performance`, `tenet-performance-benchmarks`, `arch-x64`, `untriaged`, `runtime-coreclr`, `needs-area-label`
Milestone: -
stephentoub commented 7 months ago

> Perhaps there is more than one reason for these regressions. Given there is a LINQ regression, I think this change should be considered: e101ae2

This could be related to the ElementAt test, and I can take a look at that one to see if I can repro. I don't think it could be related to the other two.

eiriktsarpalis commented 7 months ago

Presumably fixed by #99437

stephentoub commented 7 months ago

@eiriktsarpalis, what about the other tests?

eiriktsarpalis commented 7 months ago

I hadn't noticed that more regressions had been appended by the bot as a comment. Is that common?

stephentoub commented 3 months ago

All tests look to be back in normal ranges.