Closed: v-wenyuxu closed this issue 2 months ago.
Tagging subscribers to this area: @roji, @ajcvickers. See info in area-owners.md if you want to be subscribed.
Hit in an innerloop run (without JitStress): https://dev.azure.com/dnceng-public/public/_build/results?buildId=772593&view=results
@krwq @vcsjones - any ideas who might be the best owner for this issue?
Tagging subscribers to this area: @dotnet/area-system-security, @bartonjs, @vcsjones. See info in area-owners.md if you want to be subscribed.
This is hit almost once a day.
This is slightly odd. Locally, the remote executor finishes its work in < 1 second. Is CI so bogged down that it takes 5 seconds to spin up a process and run this functionality?
Does that only happen in stress runs?
We can extend the timeout to 10 seconds, but it seems like that shouldn't be needed.
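For illustration, here is a minimal sketch (not the actual test code) of the shape such a timing-sensitive RemoteExecutor test typically has; the class name, test name, and the AllottedTime constant are hypothetical. Note that the measured window includes child-process start-up, which is exactly what gets slow on a busy CI machine:

```csharp
using System;
using System.Diagnostics;
using Microsoft.DotNet.RemoteExecutor;
using Xunit;

public static class IterationCountTimingSketch
{
    // Hypothetical budget: raising this from 5 to 10 seconds is the
    // "extend the timeout" option discussed above.
    private static readonly TimeSpan AllottedTime = TimeSpan.FromSeconds(10);

    [ConditionalFact(typeof(RemoteExecutor), nameof(RemoteExecutor.IsSupported))]
    public static void Import_ThrowsInAllottedTime_Sketch()
    {
        Stopwatch watch = Stopwatch.StartNew();

        // The child process performs the PFX import that is expected to fail
        // fast once the iteration-count limit is exceeded. The stopwatch also
        // covers process start-up, so a loaded machine can eat into the budget.
        RemoteExecutor.Invoke(() =>
        {
            // ... import the PFX and assert that it throws ...
        }).Dispose();

        watch.Stop();
        Assert.True(watch.Elapsed < AllottedTime,
            $"Expected the remote import to finish within {AllottedTime}, took {watch.Elapsed}.");
    }
}
```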
> Is CI so bogged down that it takes 5 seconds to spin up a process and run this functionality?
It is not unusual for process start to take this long on overloaded CI machines.
One option is to move the tests that are sensitive to timeouts to outer loop.
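For reference, a hedged sketch of what that could look like, assuming the [OuterLoop] attribute from Microsoft.DotNet.XUnitExtensions that dotnet/runtime uses to keep tests out of innerloop runs; the class and test body here are placeholders rather than the real test:

```csharp
using Xunit;

public static class PfxIterationCountOuterLoopSketch
{
    // [OuterLoop] excludes the test from the default (innerloop) CI legs;
    // it only runs in the outerloop pipelines, where occasional slowness on
    // overloaded machines is more tolerable.
    [OuterLoop("Timing-sensitive; can exceed its time budget on overloaded CI machines")]
    [Fact]
    public static void Import_IterationCountLimitExceeded_ThrowsInAllottedTime()
    {
        // ... timing-sensitive import and elapsed-time assertion ...
    }
}
```

The trade-off is that outerloop pipelines run less frequently, so a regression in the timing behavior would surface later than it does today.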
> Does that only happen in stress runs?
It just happened on a regular CI run in my PR (no stress).
Failed in: runtime-coreclr libraries-jitstress 20240728.1
Failed tests:
Error message:
Stack trace:
Known Issue Error Message
Fill the error message using the step-by-step known issues guidance.
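The JSON block that normally sits in this section did not come through here. Based on the error message the bot validates below, a known-issue entry in the usual dotnet known-issues JSON format would look roughly like this; the exact field values are an assumption, not the issue's actual content:

```json
{
  "ErrorMessage": "Import_IterationCountLimitExceeded_ThrowsInAllottedTime",
  "BuildRetry": false
}
```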
Known issue validation
Build: :mag_right: https://dev.azure.com/dnceng-public/public/_build/results?buildId=758024
Error message validated: [System.Security.Cryptography.X509Certificates.Tests.PfxIterationCountTests_X509Certificate2.Import_IterationCountLimitExceeded_ThrowsInAllottedTime [FAIL]]
Result validation: :white_check_mark: Known issue matched with the provided build.
Validation performed at: 8/12/2024 2:00:03 PM UTC