kmurray01 opened 2 months ago
@derekbit @jan-g if you could please triage
Yes, I met the same issue.
An STS with `volumeClaimTemplates` defined generates a PV like the sketch below. `metadata.name` is not exposed as a node label in K8s (1.19–1.26), so the node does not have that label, and pod scheduling then fails with the error "had volume node affinity conflict".
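A minimal sketch of such a PV, assuming placeholder names, a hostPath-backed local-path volume, and a hypothetical node name (the real provisioner output will differ in details):

```yaml
# Sketch of a local-path PV using matchFields node affinity.
# All names and paths below are placeholders, not real provisioner output.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-0123abcd-example            # hypothetical PV name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  hostPath:
    path: /opt/local-path-provisioner/pvc-0123abcd-example   # placeholder path
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchFields:                   # matches node *fields*, not node labels
            - key: metadata.name
              operator: In
              values:
                - my-node-1              # placeholder node name
```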
I met the same issue.
Same problem. After draining a node, the pod is scheduled onto another node that does not have the PV, resulting in a crash loop or other incorrect behavior of the app inside.
Sorry for the delayed response. I'm on vacation this week. I plan to release 0.0.30 with a fix for the issue next week.
Observing this on the latest v0.0.29 release and master-head local-path-provisioner on K3s v1.30.4+k3s1.

PVs created with the new local-path-provisioner image have an updated `nodeAffinity.required.nodeSelectorTerms`, as depicted below. This change originated from PR https://github.com/rancher/local-path-provisioner/pull/414, which was subsequently included in the latest v0.0.29 release and master-head.
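The updated term selects the node by its `metadata.name` field. A sketch of that fragment, assuming the node name from this report (surrounding PV fields omitted):

```yaml
# v0.0.29 / master-head form (sketch): match on the node field metadata.name
nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchFields:
          - key: metadata.name
            operator: In
            values:
              - my-agent-host.example.com
```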
Deploying a pod that specifies a `persistentVolumeClaim` for the associated PVC, the pod is scheduled on a different node, not `my-agent-host.example.com`. That pod then fails to initialize, as it is unable to mount the PV volume path on `my-agent-host.example.com`.

Previously, on v0.0.28, the PV `nodeAffinity.required.nodeSelectorTerms` was as below, and this works: the kube-scheduler places the pod on the same node on which the PV local-path volume was created, i.e. `my-agent-host.example.com` in this example.
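As far as I can tell, the earlier form matched on the `kubernetes.io/hostname` node label rather than a node field. A sketch of that fragment with the same node name:

```yaml
# v0.0.28 form (sketch): match on the kubernetes.io/hostname node label
nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - my-agent-host.example.com
```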
It would seem that switching `nodeAffinity.required.nodeSelectorTerms` to `matchFields` on the node field `metadata.name` either does not work or the kube-scheduler does not comply with that nodeAffinity. It is also important to highlight that on the K3s node, the value of `metadata.name` matches `my-agent-host.example.com`.