What steps did you take and what happened:
1. Set up a node A and add two labels to it: `a: b` and `c: d`.
2. Set up another node B and add one label to it: `a: b`.
3. Set up a Sonobuoy plugin with the DaemonSet driver. We want to avoid running the plugin on node A.
4. Run the Sonobuoy DaemonSet plugin with either PodSpec configuration below:
   - Case X: a node selector that runs Pods only on nodes A and B, plus a node affinity that avoids running Pods on node A.
   - Case Y: a node affinity containing two match expressions in a single nodeSelectorTerms entry: one forces Pods to run only on nodes A and B, and the other prevents Pods from running on node A.
Case X

```yaml
podSpec:
  nodeSelector:
    a: b
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: c
            operator: NotIn
            values:
            - d
```
Case Y

```yaml
podSpec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: a
            operator: In
            values:
            - b
          - key: c
            operator: NotIn
            values:
            - d
```
Run the Sonobuoy DaemonSet plugin with the above plugin configuration.
Sonobuoy counts both node A and node B as available nodes.
In our environment, node A is a Fargate node and node B is a normal node. DaemonSets cannot run on Fargate nodes, so the plugin always fails with a `No pod was scheduled on node A` error.
What did you expect to happen:
Sonobuoy should not count the node A as an available node.
This issue is caused by an inconsistency in the handling of `nodeSelector` and `nodeAffinity` between the Kubernetes scheduler and Sonobuoy.
Case X: Kubernetes schedules a Pod only on a node that satisfies both `nodeSelector` and `nodeAffinity`. From the Kubernetes documentation: "If you specify both nodeSelector and nodeAffinity, both must be satisfied for the Pod to be scheduled onto a node."
Currently, Sonobuoy effectively treats `nodeSelector` and `nodeAffinity` as an OR.
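The scheduler-side semantics can be sketched as follows. This is a minimal illustration, not Sonobuoy's actual code; the helper names are mine, and only the `In`/`NotIn` operators used in this report are implemented.

```python
# Sketch of how the Kubernetes scheduler combines nodeSelector and
# nodeAffinity: a node is eligible only if it satisfies BOTH (AND),
# not either one (OR). Helper names are hypothetical.

def matches_node_selector(node_labels: dict, node_selector: dict) -> bool:
    """Every key/value pair in nodeSelector must be present on the node."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

def matches_expression(node_labels: dict, expr: dict) -> bool:
    """Evaluate a single match expression (only In / NotIn shown here)."""
    value = node_labels.get(expr["key"])
    if expr["operator"] == "In":
        return value in expr["values"]
    if expr["operator"] == "NotIn":
        return value not in expr["values"]
    raise ValueError(f"unsupported operator: {expr['operator']}")

def node_is_eligible(node_labels: dict, node_selector: dict, terms: list) -> bool:
    # Scheduler semantics: nodeSelector AND nodeAffinity must both hold.
    # Terms within nodeSelectorTerms are ORed with each other, while the
    # expressions inside one matchExpressions list are ANDed.
    selector_ok = matches_node_selector(node_labels, node_selector)
    affinity_ok = any(
        all(matches_expression(node_labels, e) for e in term["matchExpressions"])
        for term in terms
    )
    return selector_ok and affinity_ok

# Case X from this report: nodeSelector {a: b} plus affinity "c NotIn [d]".
terms = [{"matchExpressions": [{"key": "c", "operator": "NotIn", "values": ["d"]}]}]
node_a = {"a": "b", "c": "d"}   # should be excluded
node_b = {"a": "b"}             # should be eligible
print(node_is_eligible(node_a, {"a": "b"}, terms))  # False
print(node_is_eligible(node_b, {"a": "b"}, terms))  # True
```

With an OR of the two constraints, node A would pass via the `nodeSelector` alone, which matches the behavior we observed.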
Case Y: In Kubernetes, match expressions in a single `matchExpressions` field are ANDed. From the Kubernetes documentation: "If you specify multiple expressions in a single matchExpressions field associated with a term in nodeSelectorTerms, then the Pod can be scheduled onto a node only if all the expressions are satisfied (expressions are ANDed)."
Currently, Sonobuoy counts a node as available if at least one expression is matched.
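The difference can be shown directly on Case Y's two expressions. This is an illustrative sketch (the `expr_matches` helper is mine, not Sonobuoy's code) contrasting the ANDed evaluation Kubernetes performs with an at-least-one (OR) evaluation:

```python
# Contrast AND (Kubernetes) vs. OR evaluation of the expressions inside a
# single matchExpressions list. Helper name is hypothetical.

def expr_matches(labels: dict, expr: dict) -> bool:
    """Evaluate one match expression (In / NotIn only)."""
    value = labels.get(expr["key"])
    if expr["operator"] == "In":
        return value in expr["values"]
    return value not in expr["values"]  # NotIn

# Case Y's single nodeSelectorTerms entry with two expressions.
expressions = [
    {"key": "a", "operator": "In", "values": ["b"]},
    {"key": "c", "operator": "NotIn", "values": ["d"]},
]

node_a = {"a": "b", "c": "d"}  # the Fargate node that must be excluded
node_b = {"a": "b"}

# Kubernetes semantics: expressions are ANDed.
print(all(expr_matches(node_a, e) for e in expressions))  # False: A excluded
# At-least-one (OR) evaluation: node A passes via the first expression.
print(any(expr_matches(node_a, e) for e in expressions))  # True: A wrongly kept
print(all(expr_matches(node_b, e) for e in expressions))  # True: B eligible
```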
Environment:
- Kubernetes version (use `kubectl version`): Confirmed with multiple versions, 1.25~1.27
- OS (e.g. from `/etc/os-release`): n/a