What steps did you take and what happened:
I have a cluster with 1 master and 2 worker nodes. One of the workers is reporting the MemoryPressure condition. When I deploy a new Pod with an ephemeral volume backed by OpenEBS LVM-localpv, the volume is sometimes created on the Node under memory pressure. Once the volume (PV) has been created on such a Node, the scheduler can never place the Pod, because the node holding the volume is not eligible due to the memory problem. The situation persists indefinitely.
Event describing the Pod scheduling problem:
0/3 nodes are available: 1 Insufficient memory, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict.
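For context, once the PV is provisioned, LVM-localpv pins it to the node it was created on via node affinity, which is why the Pod can never be scheduled elsewhere. A sketch of the relevant PV excerpt (the node name is illustrative, and the topology key may vary by driver version):

```yaml
# Excerpt of a PV created by LVM-localpv (node name illustrative)
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: openebs.io/nodename   # topology key used by the driver
          operator: In
          values:
          - worker-1                 # the node under MemoryPressure
```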
What did you expect to happen:
I expected the volume to always be created on a "healthy" Node, not on one reporting a condition that prevents the Pod from being scheduled afterwards.
Environment:
LVM Driver version: 0.8.3
Kubernetes version: 1.21.10
Cloud provider or hardware configuration: baremetal
Resolution:
This behavior was caused by using volumeBindingMode: Immediate instead of volumeBindingMode: WaitForFirstConsumer by mistake. Once I changed it in the StorageClass, the problem disappeared.
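For reference, a minimal StorageClass sketch with the corrected binding mode (the metadata name and volume-group name are illustrative, not taken from my cluster):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv        # illustrative name
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"          # illustrative LVM volume-group name
# WaitForFirstConsumer delays volume creation until a Pod using the PVC
# is scheduled, so the scheduler picks an eligible node before the PV
# is pinned to it.
volumeBindingMode: WaitForFirstConsumer
```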