rancher / fleet

Deploy workloads from Git to large fleets of Kubernetes clusters
https://fleet.rancher.io/
Apache License 2.0

If any clusters are offline, bundle status can get stuck #594

Open philomory opened 3 years ago

philomory commented 3 years ago

If any clusters are offline/unavailable, the status of Bundles that get deployed to those clusters can get stuck with misleading/confusing error messages.

Steps to reproduce:

  1. Create a fleet workspace, and add at least two clusters. Make one of them something easy to start and stop, e.g. a single-node k3s cluster. We'll call that cluster "A".
  2. Create a ClusterGroup that contains at least two of your clusters, including cluster A. We'll call this group "G".
  3. Create a git repository containing the following code:

    # test-bundle/fleet.yaml
    defaultNamespace: test-bundle
    helm:
      chart: ./chart
      releaseName: test-bundle
    
    # test-bundle/chart/Chart.yaml
    apiVersion: v2
    name: test-bundle
    version: 0.0.1
    
    # test-bundle/chart/templates/deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test
    spec:
      selector:
        matchLabels:
          app: test
      template:
        metadata:
          labels:
            app: test
        spec:
          containers:
            - name: test
              image: paulbouwer/hello-kubernetes:1.10.1
              lifecycle:
                preStop:
                  exec:
                    command: ["sleep", "1"]
  4. Create a GitRepo object pointing to this repository that applies to ClusterGroup "G" (see the example manifests for "G" and "R" below the steps). We'll call this GitRepo "R".
  5. Let Fleet deploy the bundle from R to all clusters in G. Wait for them all to be Ready.
  6. Push the following change (which introduces an error) to the git repository R:
    --- a/test-bundle/chart/templates/deployment.yaml
    +++ b/test-bundle/chart/templates/deployment.yaml
    @@ -20,6 +20,6 @@ spec:
             - name: test
               image: paulbouwer/hello-kubernetes:1.10.1
               lifecycle:
    -            preStop:
    +            preStart:
                   exec:
                     command: ["sleep", "2"]
  7. Wait for Fleet to attempt to deploy this change. The GitRepo, Bundle, and all relevant BundleDeployments should end up in the ErrApplied state, with an error message similar to: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.containers[0].lifecycle): unknown field "preStart" in io.k8s.api.core.v1.Lifecycle.
  8. Shut off cluster A (stop the only node in the cluster).
  9. Wait until Cluster A shows as offline under Cluster Management in Rancher.
  10. Push the following change (which neither fixes the error nor introduces new errors) to GitRepo R:
    --- a/test-bundle/chart/templates/deployment.yaml
    +++ b/test-bundle/chart/templates/deployment.yaml
    @@ -18,7 +18,7 @@ spec:
         spec:
           containers:
             - name: test
    -          image: paulbouwer/hello-kubernetes:1.10.1
    +          image: paulbouwer/hello-kubernetes:1.10
               lifecycle:
                 preStart:
                   exec:
  11. Wait until the GitRepo UI in Fleet shows that it has picked up the new commit. The GitRepo, Bundle, and all relevant BundleDeployments will still be in the same error state as before.
  12. Push the following change (which fixes the initial error) to GitRepo R:
    --- a/test-bundle/chart/templates/deployment.yaml
    +++ b/test-bundle/chart/templates/deployment.yaml
    @@ -20,6 +20,6 @@ spec:
             - name: test
               image: paulbouwer/hello-kubernetes:1.10
               lifecycle:
    -            preStart:
    +            preStop:
                   exec:
                     command: ["sleep", "2"]
  13. Wait for the GitRepo UI in Fleet to show that it has picked up the new commit. Shortly after, all BundleDeployments except the one belonging to A will update to show that they are in a "Ready" state. However, the BundleDeployment for A will remain in the original error state, as will the Bundle and GitRepo.
  14. Observe that, in Fleet's UI, GitRepo R still displays an error message similar to Error validating "": error validating data: ValidationError(Deployment.spec.template.spec.containers[0].lifecycle): unknown field "preStart" in io.k8s.api.core.v1.Lifecycle, even though the actual repository no longer contains any reference to a preStart field.

It is worth noting that, if step 10 is skipped - so that the commit in step 12 (which fixes the error) is the first commit to the repo after cluster A goes offline - then in step 12 the BundleDeployment for A will go to a "Wait Applied" state rather than being stuck in the error state.
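For reference, the ClusterGroup "G" from step 2 and the GitRepo "R" from step 4 can be created with manifests roughly like the ones below. This is only a sketch: the names, namespace, cluster label, and repository URL are placeholders for whatever your setup uses, and the fields follow the fleet.cattle.io/v1alpha1 CRDs as currently documented.

    # ClusterGroup "G" (step 2): selects the clusters to deploy to, including cluster A
    apiVersion: fleet.cattle.io/v1alpha1
    kind: ClusterGroup
    metadata:
      name: g
      namespace: fleet-default
    spec:
      selector:
        matchLabels:
          env: test   # placeholder label assumed to be set on the member clusters

    # GitRepo "R" (step 4): deploys the test-bundle path to ClusterGroup "G"
    apiVersion: fleet.cattle.io/v1alpha1
    kind: GitRepo
    metadata:
      name: r
      namespace: fleet-default
    spec:
      repo: https://example.com/your-org/test-bundle-repo.git   # placeholder URL
      branch: main
      paths:
        - test-bundle
      targets:
        - clusterGroup: g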

strowi commented 1 year ago

Looks like we are running into this error 2 years later with Rancher 2.7.1. Having one downed cluster block the whole process is exactly what we were trying to circumvent with Fleet. Any idea or timeline? Currently I have to change a selector to remove the cluster from the group.

manno commented 1 year ago

This might be related to default values in the rollout strategy. The defaults are documented in the fleet.yaml reference.

manno commented 3 months ago

Let's test if this still happens on 2.9.1

mmartin24 commented 3 months ago

It seems to still be happening on 2.9.1-rc3.

Adding some notes about how it is observed on this version:

At step 7, the state is either Not Ready or Modified. Nevertheless, an error message is displayed. Error log:

Modified(3) [Bundle repo-r-test-bundle]; deployment.apps test-bundle/test modified {"spec":{"template":{"spec":{"containers":[{"image":"paulbouwer/hello-kubernetes:1.10.1","imagePullPolicy":"IfNotPresent","lifecycle":{"preStart":{"exec":{"command":["sleep","2"]}}},"name":"test","resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}]}}}}


After step 10, the state goes directly to Wait Applied.

After the 'fix' from step 12, the state of the GitRepo is Wait Applied, yet a result similar to the one originally described occurs:

Error:

WaitApplied(1) [Bundle repo-r-test-bundle]; deployment.apps test-bundle/test modified {"spec":{"template":{"spec":{"containers":[{"image":"paulbouwer/hello-kubernetes:1.10.1","imagePullPolicy":"IfNotPresent","lifecycle":{"preStart":{"exec":{"command":["sleep","2"]}}},"name":"test","resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}]}}}}


manno commented 2 months ago

When working on this we should

weyfonk commented 1 month ago

> This might be related to default values in the rollout strategy. The defaults are documented in the fleet.yaml reference.

Rollout strategy does not seem to be the culprit here, as setting maxUnavailablePartitions to 100% bundle-wide does not change anything. Reproduced this with Fleet standalone (latest main).
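For reference, the bundle-wide setting mentioned above looks roughly like this in the reproduction's fleet.yaml. This is only a sketch of the override that was tested, using the rolloutStrategy field described in the fleet.yaml reference:

    # test-bundle/fleet.yaml: variant with the rollout override that was tested
    defaultNamespace: test-bundle
    helm:
      chart: ./chart
      releaseName: test-bundle
    rolloutStrategy:
      # Tolerate all partitions being unavailable; per the comment above,
      # this did not change the stuck-status behaviour.
      maxUnavailablePartitions: 100%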

weyfonk commented 1 month ago

The issue here is that a bundle deployment's status is updated by the agent running in the bundle deployment's target cluster, so a bundle deployment targeting a downstream cluster will no longer have its status updated once that cluster is offline.

Since status data is propagated upwards from bundle deployments to bundles and GitRepos, then to clusters and cluster groups, this explains why those resources still show an outdated Modified status.

A solution for this could be to watch a Fleet Cluster resource's agent lastSeen timestamp, which is also updated by the agent on the downstream cluster and therefore stops being updated once that cluster is offline. Fleet would then need to update the statuses of all bundle deployments in that cluster once more than $threshold has elapsed since that lastSeen timestamp, so that those status updates are propagated to the other resources. $threshold could be a hard-coded value or made configurable, with a sensible default (e.g. 15 minutes or more).
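For context, the lastSeen timestamp mentioned above is reported on the downstream Cluster resource by its agent. The excerpt below only illustrates the idea; the exact field layout is an assumption based on this description, so the Cluster CRD remains the authoritative schema:

    # Excerpt of a downstream Cluster resource's status (assumed layout)
    status:
      agent:
        # Heartbeat written by the downstream agent; it stops advancing once the
        # cluster goes offline, which is what a $threshold check would compare against.
        lastSeen: "2024-08-01T12:34:56Z"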