kubernetes-retired / cluster-api-provider-nested

Cluster API Provider for Nested Clusters
Apache License 2.0

Fix dws reconcile should return true by default #313

Closed · wondywang closed this 2 years ago

wondywang commented 2 years ago

What this PR does / why we need it: dws reconcile should return true by default. Otherwise, the worker goroutine exits and the workflow jitters, with brief pauses between restarts.

Which issue(s) this PR fixes: Fixes #

k8s-ci-robot commented 2 years ago

Hi @wondywang. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
wondywang commented 2 years ago

PTAL @christopherhein

christopherhein commented 2 years ago

/ok-to-test

christopherhein commented 2 years ago

@wondywang Wouldn't jittering be expected from this so that we don't continue to retry failed queue operations? Maybe I'm mistaken by what this part was expecting to do.

cc @Fei-Guo you might have better context.

wondywang commented 2 years ago

Thanks @christopherhein. Of course, failed queue operations are re-enqueued for processing. MultiClusterController works like this (a common controller implementation):

```go
func (c *MultiClusterController) Start(stop <-chan struct{}) error {
	for i := 0; i < c.MaxConcurrentReconciles; i++ {
		// If the worker returns, wait.Until restarts it after JitterPeriod (1s).
		go wait.Until(c.worker, c.JitterPeriod, stop)
	}
	return nil
}

func (c *MultiClusterController) worker() {
	// processNextWorkItem is expected to always return true,
	// unless the queue has been shut down.
	for c.processNextWorkItem() {
	}
}

func (c *MultiClusterController) processNextWorkItem() bool {
	obj, shutdown := c.Queue.Get()
	if shutdown {
		// Stop working.
		return false
	}
	defer c.Queue.Done(obj)

	// Process the item; if it fails, re-enqueue it.

	return true
}
```

So if we return false at the end of processNextWorkItem, the worker exits and wait.Until only restarts it after JitterPeriod, so the controller pauses for a moment. With a large number of failures, these pauses add up to a long wait.

Fei-Guo commented 2 years ago

@christopherhein If you check the source code of Kubernetes controllers such as https://github.com/kubernetes/kubernetes/blob/e450d35bb303042830a8ebfaad2010908df61208/pkg/controller/daemon/daemon_controller.go#L329, processNextWorkItem returns true even in failure cases. I think this works because the failed keys are re-enqueued with rate limiting, so we are not busy-retrying failures.

christopherhein commented 2 years ago

Gotcha, so we should go ahead and approve this, since our current flow doesn't match the expected controller pattern. Thanks @Fei-Guo

k8s-ci-robot commented 2 years ago

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: christopherhein, wondywang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

- ~~[virtualcluster/OWNERS](https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/virtualcluster/OWNERS)~~ [christopherhein]

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.