Closed: opened by performanceautofiler[bot], closed 3 months ago
Name | Value |
---|---|
Architecture | x64 |
OS | Windows 10.0.22621 |
Queue | OwlWindows |
Baseline | 8538e722e1f30c526827c7c9a6abfbee5ff3b164 |
Compare | 4c90107c80a7f8eb2f38f1494b4e17d48d5c7828 |
Diff | Diff |
Configs | CompilationMode:tiered, RunKind:micro |
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio |
---|---|---|---|---|---|---|---|---|
| 80.90 ns | 98.95 ns | 1.22 | 0.20 | False | | | |
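For context, the Test/Base column is just the ratio of the two measurements. The sketch below recomputes it for the row above; the 1.05 cutoff is a hypothetical illustration, not the autofiler's actual regression rule.

```shell
# Recompute the Test/Base ratio from the table above:
# 80.90 ns baseline vs 98.95 ns test. The 1.05 cutoff is a
# made-up threshold for illustration, not the bot's real rule.
out=$(awk 'BEGIN {
  base = 80.90; test = 98.95
  r = test / base
  printf "Test/Base = %.2f (%s)", r, (r > 1.05 ? "regression" : "ok")
}')
echo "$out"   # Test/Base = 1.22 (regression)
```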
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
```
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net8.0 --filter 'System.Text.RegularExpressions.Tests.Perf_Regex_Common*'
```
Name | Value |
---|---|
Architecture | x64 |
OS | Windows 10.0.22621 |
Queue | OwlWindows |
Baseline | d06ebfee1cfd1ef437784013e93ccfcd31334ac0 |
Compare | 5742895d7c7493dfae4ac40ab36019995d256dd1 |
Diff | Diff |
Configs | CompilationMode:tiered, RunKind:micro |
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio |
---|---|---|---|---|---|---|---|---|
| 573.27 ms | 614.59 ms | 1.07 | 0.01 | False | | | |
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
```
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net8.0 --filter 'Benchstone.BenchF.Whetsto*'
```
This is the range of commits; nothing seems to jump out, but we are seeing this regression across all of our configurations.
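When nothing in the range jumps out, `git bisect run` can automate the search between the Baseline and Compare shas from the tables above. The real dotnet/runtime repo is too heavy to bisect here, so this is a self-contained sketch of the mechanics on a throwaway repo, where a `regression` marker file stands in for a failing benchmark check:

```shell
# Sketch of narrowing a regression range with `git bisect run`.
# In the real case the endpoints would be the Baseline/Compare shas
# from the report; here a throwaway repo with a `regression` marker
# file (introduced at commit 4 of 5) stands in for the benchmark.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email bisect@example.com
git config user.name bisect
for i in 1 2 3 4 5; do
  [ "$i" -ge 4 ] && touch regression   # the "regression" lands in commit 4
  echo "$i" > file
  git add -A
  git commit -qm "commit $i"
done
good=$(git rev-parse HEAD~4)   # commit 1: known-good baseline
bad=$(git rev-parse HEAD)      # commit 5: known-bad compare
git bisect start "$bad" "$good"
# Exit 0 = good (marker absent), non-zero = bad; bisect halves the range.
result=$(git bisect run sh -c '! test -f regression')
git bisect reset >/dev/null
echo "$result" | grep "is the first bad commit"
```

With a real benchmark, the `sh -c` script would run the measurement and compare it against a chosen threshold instead of testing for a file.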
Perhaps there is more than one reason for these regressions. Given that there is a LINQ regression, I think this change should be considered: https://github.com/dotnet/runtime/commit/e101ae2bd1c198ba7aaa209d1a4c55d6ce6b4073
Tagging subscribers to this area: @dotnet/area-system-linq. See info in area-owners.md if you want to be subscribed.
Author: | performanceautofiler[bot] |
---|---|
Assignees: | - |
Labels: | `area-System.Linq`, `os-windows`, `tenet-performance`, `tenet-performance-benchmarks`, `arch-x64`, `untriaged`, `runtime-coreclr`, `needs-area-label` |
Milestone: | - |
This could be related to the ElementAt test, and I can take a look at that one to see if I can repro. I don't think it could be related to the other two.
Presumably fixed by #99437
@eiriktsarpalis, what about the other tests?
I hadn't noticed that more regressions had been appended by the bot as a comment. Is that common?
All tests look to be back in normal ranges.
Run Information
Regressions in System.Linq.Tests.Perf_Enumerable
Test Report
Repro