rancher / system-upgrade-controller

In your Kubernetes, upgrading your nodes
Apache License 2.0

Allow same-version plan to run again #315

Closed: jflambert closed this issue 3 months ago

jflambert commented 3 months ago

Describe the solution you'd like
I'd like to be able to run the same plan multiple times, even if the version is exactly the same.

Additional context
Here's what I currently do:

kubectl apply -f plan.yaml --server-side --force-conflicts

plan.yaml

apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: my-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  nodeSelector:
    matchExpressions:
    - {key: node-role.kubernetes.io/master, operator: Exists}
  serviceAccountName: system-upgrade
  version: v1.0
  upgrade:
    image: my-image

The first time I run this, a new plan is created and a job runs.

The second time I run it, nothing happens. If I do delete the plan, it comes back, but no job runs.

For various reasons, I'd like the upgrade job to execute again. How can I tell SUC to allow this pattern? I see the plan has status.latestHash and status.latestVersion; how can I reset those?

brandond commented 3 months ago

You have two choices: change the plan or delete/modify the plan label on the nodes.
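Concretely, the two choices might look like this (a sketch: the plan name, namespace, and label key are taken from the YAML and commands elsewhere in this thread; the "v1.1" version string is an arbitrary example, not from the source):

```shell
# Option 1: change the plan so its computed hash changes,
# e.g. bump spec.version ("v1.1" is just an example value)
kubectl patch plan my-plan -n system-upgrade --type merge \
  -p '{"spec":{"version":"v1.1"}}'

# Option 2: remove the per-plan label from the nodes so the
# controller treats them as not yet upgraded
# (the trailing "-" deletes the label)
kubectl label node --all plan.upgrade.cattle.io/my-plan-
```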

jflambert commented 3 months ago

If I do delete the plan, it comes back, but no job runs.

Is this not enough? Forgive my ignorance. Which resource holds this "plan label"?

I should clarify I'm still using v0.13.0

brandond commented 3 months ago

the plan label on the nodes

Which resource holds this "plan label"

Like I said, the nodes.

The label is plan-specific, and includes a hash of the successfully applied plan. This is linked from the readme: https://github.com/rancher/system-upgrade-controller/blob/master/doc/architecture.png
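For reference, here is one way to inspect that label (a sketch; it assumes the plan name my-plan from the YAML above, which makes the label key plan.upgrade.cattle.io/my-plan):

```shell
# Show each node's value for the plan label; nodes the plan has
# been applied to carry a hash of the plan, others show an empty column
kubectl get nodes -L plan.upgrade.cattle.io/my-plan

# Compare with the hash the controller recorded on the plan itself
kubectl get plan my-plan -n system-upgrade \
  -o jsonpath='{.status.latestHash}'
```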

jflambert commented 2 months ago

Thank you @brandond for your guidance. If anyone runs into this issue via Google, here's how I solved it:

kubectl label node plan.upgrade.cattle.io/my-plan- --all >/dev/null