Open Duranna66 opened 5 months ago
Can you provide more details about what you expect and what you are seeing?
I expect that pods controlled by a DaemonSet will not be deleted when I do a drain, but they are. Example:
"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods "calico-node-pbzsr" is forbidden: User "system:serviceaccount:exmaple:users.tech-user-test" cannot delete resource "pods" in API group "" in the namespace "calico-system"","reason":"Forbidden","details":{"name":"calico-node-pbzsr","kind":"pods"},"code":403}
Why does the k8s client try to delete this pod? Thanks for the feedback.
This is how we check for membership in the DaemonSet: specifically, we look for an owner reference with kind DaemonSet.
Can you share the YAML for that calico pod (kubectl get pod -o yaml ...) so that we can see what the reference is set to?
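Conceptually, the check amounts to something like the sketch below (illustrative only, not the library source; the hasDaemonSetOwner name is made up here):

import io.kubernetes.client.openapi.models.V1OwnerReference;
import io.kubernetes.client.openapi.models.V1Pod;
import java.util.List;

// Sketch: a pod counts as DaemonSet-managed if any of its ownerReferences has kind "DaemonSet".
static boolean hasDaemonSetOwner(V1Pod pod) {
    List<V1OwnerReference> refs = pod.getMetadata().getOwnerReferences();
    return refs != null && refs.stream().anyMatch(ref -> "DaemonSet".equals(ref.getKind()));
}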
ownerReferences:
Okay, let's check the code from your link:
for (V1Pod pod : allPods.getItems()) {
    // at this point we know, that we have to ignore daemon set pods
    if (pod.getMetadata().getOwnerReferences() != null) {
        for (V1OwnerReference ref : pod.getMetadata().getOwnerReferences()) {
            if (ref.getKind().equals("DaemonSet")) {
                continue;
            }
        }
    }
    deletePod(api, pod.getMetadata().getName(), pod.getMetadata().getNamespace());
}
return node;
If the pod is owned by a DaemonSet we still delete it anyway, because the continue only applies to the inner for loop.
Maybe try this one:
boolean isDaemonSetPod;
for (V1Pod pod : allPods.getItems()) {
    // at this point we know, that we have to ignore daemon set pods
    isDaemonSetPod = false;
    if (pod.getMetadata().getOwnerReferences() != null) {
        for (V1OwnerReference ref : pod.getMetadata().getOwnerReferences()) {
            if (ref.getKind().equals("DaemonSet")) {
                isDaemonSetPod = true;
                break;
            }
        }
    }
    if (!isDaemonSetPod) {
        deletePod(api, pod.getMetadata().getName(), pod.getMetadata().getNamespace());
    }
}
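Another way to get the same effect, sketched here with the same names as the quoted code, is a labeled continue instead of the boolean flag:

podLoop:
for (V1Pod pod : allPods.getItems()) {
    // skip DaemonSet-managed pods entirely
    if (pod.getMetadata().getOwnerReferences() != null) {
        for (V1OwnerReference ref : pod.getMetadata().getOwnerReferences()) {
            if (ref.getKind().equals("DaemonSet")) {
                continue podLoop; // jumps to the next pod, not just the next owner reference
            }
        }
    }
    deletePod(api, pod.getMetadata().getName(), pod.getMetadata().getNamespace());
}

Either variant works; the flag-based version above is arguably easier to read.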
We'd be happy to take a PR with any improvements to that code.
Oh, I see you sent #3537 thank you! I will review it.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Describe the bug
Pods controlled by a DaemonSet are still deleted when draining a node with ignoreDaemonSets().
Client Version: 19.0.0
Kubernetes Version: 1.19.3
Java Version: Java 17
To Reproduce
Steps to reproduce the behavior:
Kubectl.drain()
    .ignoreDaemonSets()
    .force()
    .name("nodeExample")
    .execute();
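For reference, a self-contained sketch of the reproduction (assumptions: a standard kubeconfig is available at the default location, the DrainRepro class name is made up here, and "nodeExample" is a placeholder node name):

import io.kubernetes.client.extended.kubectl.Kubectl;
import io.kubernetes.client.extended.kubectl.exception.KubectlException;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.Configuration;
import io.kubernetes.client.util.ClientBuilder;

public class DrainRepro {
    public static void main(String[] args) throws Exception {
        // Assumption: load the cluster connection from the default kubeconfig.
        ApiClient client = ClientBuilder.defaultClient();
        Configuration.setDefaultApiClient(client);

        try {
            // Same call chain as in the report; "nodeExample" is a placeholder.
            Kubectl.drain()
                .apiClient(client)
                .ignoreDaemonSets()
                .force()
                .name("nodeExample")
                .execute();
        } catch (KubectlException e) {
            // With the behavior described above, this can surface as a 403 when the
            // caller is not allowed to delete the DaemonSet-managed pods.
            e.printStackTrace();
        }
    }
}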
Expected behavior
DaemonSet-controlled pods are not deleted during the drain; instead, they start to be deleted.
KubeConfig example:
clusters:
Server (please complete the following information):
Additional context