Closed — Raffo closed this issue 1 year ago
Yeah. I've been meaning to rewrite the StatusReader to use composition instead of needing a list; that way we could easily allow for injecting the StatusReader into the Applier/Destroyer, which would let users inject their own implementations, extending or replacing the default ones.
It just hasn't gotten to the top of the priority list yet.
If you have a list of examples that need it, that might help raise the priority.
@mortent does your recent StatusReader change make it possible/easier to inject it into the Applier?
It does make it somewhat easier, but I think we can still make it better. With the recent change it is easier to create a new DelegatingStatusReader that includes custom StatusReader implementations. The custom ones will be prioritized before the built-in ones, so it allows for adding support for new types or replacing the behavior of existing ones.
Providing a custom StatusReader to the Applier/Destroyer currently requires providing a custom StatusWatcher. This is a bit awkward and we should probably allow setting the StatusReader through the Applier/Destroyer builder.
I am not familiar with Applier/Destroyer, but we (GitHub) are not using the apply functionality directly, only kstatus
as a library to implement wait functionality, mostly because upstream Kubernetes does not have a standard/common way to do this for multiple resources. As we now have a couple of CRDs that we deal with, the need to customize the logic has become more important for us. Additionally, even for core resources like Deployments or Jobs, we need to customize their "definition of done" during the wait due to specifics and limitations of our platform.
Does ☝️ answer your question above too @karlkfi ?
So if you only want to wait for a set of resources to reconcile with either the StatusPoller or the StatusWatcher, it is possible to change the set of StatusReaders used.
Example for the StatusWatcher: https://github.com/GoogleContainerTools/kpt/blob/main/pkg/status/poller.go#L43
Example for the StatusPoller: https://github.com/GoogleContainerTools/kpt/blob/main/pkg/status/poller.go#L28
Writing custom StatusReaders is possible and there is an example here. It currently requires more boilerplate than it should, but I haven't had time to fix it yet.
@mortent so what you are suggesting is to use https://github.com/kubernetes-sigs/cli-utils/blob/2cd8a3ab4594bddaf7a88ad7c950336867587e12/pkg/kstatus/polling/polling.go#L77 to plug in a custom status reader for individual resources, right? I had never found this, and I agree that it requires quite a bit of boilerplate, but it is definitely better than what we are doing today.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Similar to https://github.com/kubernetes-sigs/cli-utils/issues/350, but not only related to Jobs: it would be great to have a way to override the behavior of the condition functions (i.e. those defined in https://github.com/kubernetes-sigs/cli-utils/blob/master/pkg/kstatus/status/core.go#L22 ) for both core types (e.g. Deployments) and individual CRDs. I tried looking at the code and docs, but it doesn't seem like there is an easy way to do so. I'd be happy to provide an upstream change if we agree that this makes sense for this library.
The use cases we have are:
Happy to hear your thoughts and thanks for this project, it has been really helpful.