kubernetes-sigs / node-feature-discovery

Node feature discovery for Kubernetes

NFD will remove and re-add node labels if nfd-worker pod is deleted (and re-created by the nfd-worker DS) #1752

Open adrianchiris opened 4 months ago

adrianchiris commented 4 months ago

What happened:

NFD will remove any node labels associated with the NodeFeature of a specific node if the nfd-worker pod of that node gets deleted. After the pod is deleted it gets re-created (by the nfd-worker DaemonSet), which then recreates the NodeFeature CR for the node and the labels come back (the same goes for annotations and extendedResources).

Workloads that rely on such labels in their nodeSelector/affinity will get disrupted, as they will be removed and rescheduled.

This happens because nfd-worker creates the NodeFeature CR with an OwnerReference pointing to itself, i.e. the worker pod [1].

[1] https://github.com/kubernetes-sigs/node-feature-discovery/blob/0418e7ddf33424b150c68ca8fe71fcfc98440039/pkg/nfd-worker/nfd-worker.go#L716
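For illustration, this is roughly what that owner reference amounts to today (a paraphrased sketch, not a verbatim copy of the linked code; the helper name and the downward-API env vars are assumptions):

```go
package worker

import (
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownByWorkerPod makes the worker Pod itself the owner of the NodeFeature object.
// Deleting the pod therefore lets the Kubernetes garbage collector delete the
// NodeFeature, and nfd-master drops the node labels until the replacement pod
// re-creates the object.
func ownByWorkerPod(nf metav1.Object) {
	nf.SetOwnerReferences([]metav1.OwnerReference{{
		APIVersion: "v1",
		Kind:       "Pod",
		Name:       os.Getenv("POD_NAME"), // assumed to be injected via the downward API
		UID:        types.UID(os.Getenv("POD_UID")),
	}})
}
```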

What you expected to happen:

In the end I'd expect the labels not to get removed if the nfd-worker pod gets restarted. Going further into the details, I'd expect the NodeFeature CR not to be deleted when the pod is deleted.

This can be achieved by setting the owner reference to the nfd-worker DaemonSet, which is not as ephemeral as the pods it creates. In addition, to deal with redeploying the DaemonSet with different selectors/affinity/tolerations, the gc component can be extended to clean up NodeFeature objects for nodes that are not intended to run nfd-worker pods.
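A minimal sketch of the owner-reference part of this proposal, assuming the worker can look up (or be told) its own DaemonSet; the helper name is hypothetical:

```go
package worker

import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownByWorkerDaemonSet points the NodeFeature's owner reference at the
// longer-lived nfd-worker DaemonSet instead of the ephemeral worker pod, so
// deleting a single pod no longer triggers garbage collection of the object.
func ownByWorkerDaemonSet(nf metav1.Object, ds *appsv1.DaemonSet) {
	nf.SetOwnerReferences([]metav1.OwnerReference{{
		APIVersion: "apps/v1",
		Kind:       "DaemonSet",
		Name:       ds.Name,
		UID:        ds.UID,
	}})
}
```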

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

ArangoGutierrez commented 4 months ago

I like the first part of the idea: "This can be achieved by setting the owner reference to the nfd-worker DaemonSet, which is not as ephemeral as the pods it creates."

But for the second part, "the gc component can be extended to clean up NodeFeature objects for nodes that are not intended to run nfd-worker pods": how will GC know which nodes are tainted for the worker? A label?

adrianchiris commented 4 months ago

> but for the second part, "the gc component can be extended to clean up NodeFeature objects for nodes that are not intended to run nfd-worker pods": how will GC know which nodes are tainted for the worker? A label?

This bit is intended to handle updates of the nfd-worker DS selectors/affinity/tolerations, where nfd-worker pods may get removed from some nodes in the cluster. This can be an additional improvement (separate PR?) as I'm not sure how often this will happen.

What I was thinking regarding the GC flow for this case:

Other ideas are welcome :)

ArangoGutierrez commented 4 months ago

> will require to GET the node and the nfd-worker DS and check selectors, affinity, tolerations against the node obj
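(For reference, that check could look roughly like the following. This is a hypothetical sketch using the upstream scheduling helpers, not existing NFD code.)

```go
package gc

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	v1helper "k8s.io/component-helpers/scheduling/corev1"
	"k8s.io/component-helpers/scheduling/corev1/nodeaffinity"
)

// workerShouldRunOnNode reports whether the nfd-worker DaemonSet is expected to
// place a pod on the given node, based on its nodeSelector, required node
// affinity and tolerations.
func workerShouldRunOnNode(ds *appsv1.DaemonSet, node *corev1.Node) (bool, error) {
	// Build a pod from the DS template so the scheduling helpers can evaluate it.
	pod := &corev1.Pod{Spec: *ds.Spec.Template.Spec.DeepCopy()}

	// nodeSelector + required node affinity.
	match, err := nodeaffinity.GetRequiredNodeAffinity(pod).Match(node)
	if err != nil || !match {
		return false, err
	}

	// Taints vs. tolerations; only NoSchedule/NoExecute prevent placement.
	_, untolerated := v1helper.FindMatchingUntoleratedTaint(node.Spec.Taints, pod.Spec.Tolerations,
		func(t *corev1.Taint) bool {
			return t.Effect == corev1.TaintEffectNoSchedule || t.Effect == corev1.TaintEffectNoExecute
		})
	return !untolerated, nil
}
```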

I would prefer to go for something like finalizers initially and check if that's enough. Having an annotation that the GC can read, and if present, not remove the NF from the Node. Would this be enough?

adrianchiris commented 4 months ago

> I would prefer to go for something like finalizers initially and check if that's enough. Having an annotation that the GC can read, and if present, not remove the NF from the Node. Would this be enough?

How would this work? Who adds/removes the finalizer? AFAIU finalizers prevent deletion; in our case we want to trigger deletion of "orphaned" NFs.

ArangoGutierrez commented 4 months ago

Yeah, you are right. After thinking about it, your idea is the right approach.

ArangoGutierrez commented 4 months ago

I think we can split this issue into 2 action items:

ArangoGutierrez commented 4 months ago

First PR to address this issue:

marquiz commented 4 months ago

I was exploring this very issue in the v0.16 cycle but didn't come up with any good solution (lack of bandwidth). I was pondering three possible solutions, two of which have been mentioned here:

  1. Have a grace period (in the nfd-master) after NF delete event, before updating the node
  2. Use finalizers
  3. Copy worker pod's owner refs to NF

From these, 2) isn't viable (AFAIU) as the finalizer would only delay the deletion of the NF, but you cannot undelete it when the new worker pod comes up. For 1) I did some prototyping, but the code ended up hairy (at least in my hands) and felt like a probable source of caveats and other problems. So maybe 3) would be the least problematic solution.

For the possible GC improvements (if we need/want them), could we exploit the owner refs for that, too? E.g. see if the owner pod of the NF exists; if not, mark the NF as orphaned. On the next GC round, if the NF is still orphaned (and the owner pod UID hasn't changed), delete the NF.
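A rough sketch of the owner-pod liveness check that mark-then-delete flow would be built on, assuming the NF carries an owner reference to its worker pod; the helper name is hypothetical:

```go
package gc

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podOwnerAlive reports whether the NodeFeature's Pod owner reference still
// resolves to a live pod with the same UID. If it does not, the gc would mark
// the NodeFeature as orphaned and delete it on the next round if it is still
// orphaned (and the recorded pod UID has not changed).
func podOwnerAlive(ctx context.Context, cs kubernetes.Interface, namespace string, ref metav1.OwnerReference) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, ref.Name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return pod.UID == ref.UID, nil
}
```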

Thoughts?

adrianchiris commented 4 months ago

> For the possible GC improvements (if we need/want them), could we exploit the owner refs for that, too? E.g. see if the owner pod of the NF exists; if not, mark the NF as orphaned. On the next GC round, if the NF is still orphaned (and the owner pod UID hasn't changed), delete the NF.

Hey @marquiz! I didn't understand how this will work.

If we set the NF ownerReference to the nfd-worker DS, then if the DS is deleted (e.g. helm uninstall of NFD), k8s will garbage collect the NF objects, which is what we want.

If the user updates the DaemonSet (e.g. changes the pod template node affinity via helm update), the worker DS will be updated and pods will get re-created, potentially running on different nodes. In this case the owner of the NF is still the same DS.

Is this not the case? If it is, then for this case I don't think we can use OwnerReference, can we?

marquiz commented 4 months ago

> I didn't understand how this will work.

This would rely on having two owner references, both the DS and the Pod.

a) If the DS is deleted, then both the DS and the Pod are gone -> the NF will be GC'd by Kubernetes.

b) If only the Pod is gone but the DS remains -> the NF will not be GC'd by Kubernetes. However, nfd-gc could detect this situation and GC the NF.
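A minimal sketch of how the NF could carry both references (the worker pod's own owner references already point at the DaemonSet); the function name is hypothetical:

```go
package worker

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setNodeFeatureOwners copies the worker pod's owner references (the DaemonSet)
// onto the NodeFeature and adds the pod itself as a second owner. Kubernetes
// only garbage-collects the object once all owners are gone (case a), so a
// deleted pod alone leaves the NF in place for nfd-gc to handle (case b).
func setNodeFeatureOwners(nf metav1.Object, workerPod *corev1.Pod) {
	refs := append([]metav1.OwnerReference{}, workerPod.OwnerReferences...)
	refs = append(refs, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "Pod",
		Name:       workerPod.Name,
		UID:        workerPod.UID,
	})
	nf.SetOwnerReferences(refs)
}
```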

Makes sense?

adrianchiris commented 4 months ago

> Makes sense?

Yes it does, thanks.

ArangoGutierrez commented 3 months ago

https://github.com/kubernetes/enhancements/issues/4753

k8s-triage-robot commented 4 weeks ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale