Chaunceyctx opened this issue 1 year ago (Open)
/sig node
/assign
@bobbypage @liggitt @tallclair @Random-Liu PTAL
/cc
/triage accepted
@klueska @dashpole @liggitt @derekwaynecarr @pacoxu PTAL. Thanks!
I have encountered the same bug. Is there a plan to fix it?
I hope the community can fix this bug as soon as possible.
I have also run into this before. Is there a plan to fix it?
@ffromani @tzneal I believe the actual impact of this problem is quite small: the node's taint only disappears briefly and then reappears. Given that, is it still necessary for us to fix it?
What happened?
I have a Kubernetes cluster (v1.27.2) containing one node. In the kubelet configuration I set the hard eviction threshold nodefs.available to 90% and then wrote a large amount of data to the kubelet root dir (8GB used out of 10GB total) to trigger eviction.
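For reference, a minimal KubeletConfiguration sketch for that setup; only the evictionHard threshold comes from this report, everything else is left at its defaults:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  nodefs.available: "90%"
```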
The node.kubernetes.io/disk-pressure taint was added to the node as expected. But when the kubelet restarted, the previous disk-pressure taint was unexpectedly wiped, and a pending pod was scheduled onto the node as if there were no pressure. Checking the kubelet logs, the sequence after the restart is:

1. kubelet restarts
2. kubelet updates the node status to NodeHasNoDiskPressure
3. the eviction manager starts its first synchronize

Q: Why does the kubelet report NodeHasNoDiskPressure?
A: Because the eviction manager has not yet run its synchronize method, so no disk-pressure condition exists when the node status is first updated and the taint is removed. A simplified sketch of this ordering follows below.
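A minimal Go sketch of the ordering problem, using a simplified stand-in for the eviction manager (names and structure are illustrative, not the actual kubelet source):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// evictionManager is a stand-in for the kubelet's eviction manager.
type evictionManager struct {
	mu           sync.Mutex
	diskPressure bool
}

// synchronize mimics the periodic loop that measures nodefs usage and
// records whether the node is under disk pressure.
func (m *evictionManager) synchronize(nodefsAvailablePct, thresholdPct float64) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.diskPressure = nodefsAvailablePct < thresholdPct
}

// IsUnderDiskPressure is what the node status update consults.
func (m *evictionManager) IsUnderDiskPressure() bool {
	m.mu.Lock()
	defer m.mu.Unlock()
	return m.diskPressure
}

func main() {
	m := &evictionManager{}

	// Right after a restart, the node status update can run before the first
	// synchronize pass, so it sees no pressure and the taint is removed.
	if !m.IsUnderDiskPressure() {
		fmt.Println("node status: NodeHasNoDiskPressure (taint wiped)")
	}

	// Only later does the first synchronize observe that just 20% of nodefs
	// is available against the 90% hard eviction threshold from this report.
	go m.synchronize(20, 90)
	time.Sleep(100 * time.Millisecond)

	if m.IsUnderDiskPressure() {
		fmt.Println("node status: NodeHasDiskPressure (taint re-added)")
	}
}
```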
What did you expect to happen?
The previous disk-pressure taint is not wiped across the kubelet restart.
How can we reproduce it (as minimally and precisely as possible)?
Restart the kubelet repeatedly after disk-pressure eviction has been triggered, and observe node.spec.taints; a small watcher sketch is included below.
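A small client-go sketch for observing the taints while restarting the kubelet; the kubeconfig path and node name below are assumptions, adjust them for your cluster:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from the default kubeconfig location (assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const nodeName = "my-node" // hypothetical node name

	// Poll node.spec.taints once per second while the kubelet is restarted.
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s taints=%v\n", time.Now().Format(time.RFC3339), node.Spec.Taints)
		time.Sleep(time.Second)
	}
}
```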
Anything else we need to know?
No response
Kubernetes version
1.27.2
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)