drewhagen opened this issue 1 month ago
This issue is currently awaiting triage.
If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.
The `triage/accepted` label can be added by org members by writing `/triage accepted` in a comment.
Thanks y'all! I notice that #126096 is in active code review.
Also, @kubernetes/sig-windows-bugs: the first release cut (1.32.0-alpha.1) is due on Oct 1st 2024, less than a week from today. Given that this flake is on master-informing and is being addressed, can we consider it a non-blocker for this next release cut? Please advise - thank you!
This isn't a blocker.
These errors are failures in bringing up a test cluster and happen before we run any of the e2e tests. I think we need to figure out how to get more logs for these failures: either Azure ARM logs or possibly some logs from the capz controllers (see the sketch below).
/cc @jsturtevant @ritikaguptams
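For reference, a minimal sketch of pulling CAPZ controller logs from the management cluster. It assumes the stock CAPZ install layout (namespace `capz-system`, deployment `capz-controller-manager`); adjust the names if the prow jobs deploy it differently:

```sh
# Tail recent logs from the CAPZ controller manager; kubectl picks one pod
# behind the deployment, which is usually enough for a single-replica install.
kubectl logs -n capz-system deployment/capz-controller-manager --since=1h --tail=500

# Narrow to the provisioning errors seen in these runs.
kubectl logs -n capz-system deployment/capz-controller-manager --since=1h | grep -i "SkuNotAvailable"
```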
Updated Windows 2022 in the job, but it does not seem to have a clear effect.
In parallel, another infra issue is happening on CAPZ, related to region availability; it only gets fixed on the next retry.
```
--------------------------------------------------------------------------------
RESPONSE 409: 409 Conflict
ERROR CODE: SkuNotAvailable
--------------------------------------------------------------------------------
{
  "error": {
    "code": "SkuNotAvailable",
    "message": "The requested VM size for resource 'Following SKUs have failed for Capacity Restrictions: Standard_D2s_v3' is currently not available in location 'westus2'. Please try another size or deploy to a different location or different zone. See https://aka.ms/azureskunotavailable for details.",
    "target": "vmSize"
  }
}
--------------------------------------------------------------------------------
```
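As a quick diagnostic, SKU restrictions can be checked directly before retrying. A sketch using the Azure CLI (the fallback region `westus3` is just an illustrative example, not what the job uses):

```sh
# A non-empty Restrictions column for westus2 would explain the 409 above.
az vm list-skus --location westus2 --size Standard_D2s_v3 --output table

# Check a candidate fallback region the same way.
az vm list-skus --location westus3 --size Standard_D2s_v3 --output table
```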
/milestone v1.32
Hello @knabben @marosset. Thanks for taking action on this!
A friendly reminder of what's ahead: code freeze starts 02:00 UTC Friday November 8th 2024 (about 3 weeks from now). While there is still time, we want to ensure that each PR has a chance to be merged on time. Given this timeline and capacity, will a fix for this continue to be aimed at the 1.32 release? Thanks! 😄 🚀
👋 @marosset @knabben
Thanks for updating Windows 2022 in that job. Is this still an issue, and do we plan to resolve it for v1.32?
To that end, I want to extend a friendly reminder that code freeze starts 02:00 UTC Friday November 8th 2024 (a little less than 1 week from now). Please make sure any new PRs have both `lgtm` and `approved` labels before the code freeze. Thanks! 👍
👋 Hello @marosset @knabben! Appreciate all of your efforts on this! Is the plan still to resolve this issue for v1.32? If so, a gentle reminder that code freeze started at 02:00 UTC on Friday November 8th 2024. Please make sure any PRs have both `lgtm` and `approved` labels ASAP, and file an exception request. Thanks!
**Which jobs are flaking?**
ci-kubernetes-e2e-capz-master-windows

**Which tests are flaking?**
ci-kubernetes-e2e-capz-master-windows.Overall

**Since when has it been flaking?**
Failed runs:
Testgrid link
Testgrid link

**Reason for failure (if possible)**

**Anything else we need to know?**

**Relevant SIG(s)**
/sig windows
/kind flake
cc: @kubernetes/release-team-release-signal