pacoxu opened this issue 7 months ago (status: Open)
This issue is currently awaiting triage.
If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
STEP: mirror pod should restart with count 1 - k8s.io/kubernetes/test/e2e_node/mirror_pod_test.go:180 @ 01/23/24 20:07:57.962
[FAILED] Timed out after 126.004s.
Expected
<*fmt.wrapError | 0xc000a92ce0>:
expected the mirror pod "static-pod-f493ee91-48e0-4ead-a779-7984c9c9caaa-tmp-node-e2e-bc68fd6f-fedora-coreos-39-20240104-3-0-gcp-x86-64" to appear: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
{
msg: "expected the mirror pod \"static-pod-f493ee91-48e0-4ead-a779-7984c9c9caaa-tmp-node-e2e-bc68fd6f-fedora-coreos-39-20240104-3-0-gcp-x86-64\" to appear: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline",
err: <*fmt.wrapError | 0xc000a92cc0>{
msg: "client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline",
err: <*errors.errorString | 0xc000815a00>{
s: "rate: Wait(n=1) would exceed context deadline",
},
},
}
to be nil
BTW, the static pod becomes Running a few seconds later (2-5s in recent flaky CI runs).
It flaked once in https://testgrid.k8s.io/sig-release-master-informing#ci-crio-cgroupv1-node-e2e-conformance. /cc @harche @SergeyKanzhelev
/assign @harche @rphillips
I do not see that test flaking at all in recent runs: https://testgrid.k8s.io/sig-release-master-informing#ci-crio-cgroupv2-node-e2e-conformance
It flaked again today: https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/125510/pull-kubernetes-node-e2e-containerd/1803712091828260864 on an unrelated branch.
The graph shows it happens from time to time: https://storage.googleapis.com/k8s-triage/index.html?test=should%20successfully%20recreate%20when%20file%20is%20removed%20and%20recreated
Which jobs are flaking?
https://storage.googleapis.com/k8s-triage/index.html?test=MirrorPod%20when%20create%20a%20mirror%20pod%20without%20changes
Which tests are flaking?
[sig-node] MirrorPod when create a mirror pod without changes should successfully recreate when file is removed and recreated [NodeConformance]
Since when has it been flaking?
NA
Testgrid link
https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-evented-pleg
https://testgrid.k8s.io/sig-release-master-informing#ci-crio-cgroupv2-node-e2e-conformance
Reason for failure (if possible)
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-crio-cgroupv1-evented-pleg/1729257505558630400
Anything else we need to know?
It may be related to https://github.com/kubernetes/kubernetes/issues/121349. The related feature is https://github.com/kubernetes/enhancements/issues/3386. After we reverted EventedPLEG to alpha, it still flakes in ci-crio-cgroupv2-node-e2e-conformance.
Relevant SIG(s)
/sig node