kubernetes-sigs / controller-runtime

Repo for the controller-runtime subproject of kubebuilder (sig-apimachinery)
Apache License 2.0

Logging the reason for a reconcile #1997

Closed · austince closed this issue 9 months ago

austince commented 2 years ago

Hey there, is there any mechanism to log which event caused a reconcile request, or generally some other tooling to figure this out?

We have a controller that owns some resources directly, references others it does not own, and has a custom source for external triggers. All these different sources make it difficult to understand why a given reconcile request was triggered, especially in cases when trying to debug over-active reconcilers.

alvaroaleman commented 2 years ago

Not really. The problem is that the reconcile.Request gets put into a workqueue that does de-duplication, so custom information like that cannot be preserved (or at least not with the current workqueue implementation).

The best you can do is add logging to your eventsources and/or handlers.
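For illustration, a minimal sketch of such a wrapper, assuming the handler.EventHandler interface as it looked around controller-runtime v0.12 (no context parameter); the loggingHandler type and logger name are made up. It logs each event and then delegates to the wrapped handler, so the enqueue behavior stays the same:

import (
	"k8s.io/client-go/util/workqueue"
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	logf "sigs.k8s.io/controller-runtime/pkg/log"
)

var log = logf.Log.WithName("event-logger")

// loggingHandler (hypothetical) wraps another handler.EventHandler, logs every
// event it sees, and then delegates to the wrapped handler unchanged.
type loggingHandler struct {
	inner handler.EventHandler
}

var _ handler.EventHandler = &loggingHandler{}

func (h *loggingHandler) Create(e event.CreateEvent, q workqueue.RateLimitingInterface) {
	log.Info("create event", "namespace", e.Object.GetNamespace(), "name", e.Object.GetName())
	h.inner.Create(e, q)
}

func (h *loggingHandler) Update(e event.UpdateEvent, q workqueue.RateLimitingInterface) {
	log.Info("update event", "namespace", e.ObjectNew.GetNamespace(), "name", e.ObjectNew.GetName())
	h.inner.Update(e, q)
}

func (h *loggingHandler) Delete(e event.DeleteEvent, q workqueue.RateLimitingInterface) {
	log.Info("delete event", "namespace", e.Object.GetNamespace(), "name", e.Object.GetName())
	h.inner.Delete(e, q)
}

func (h *loggingHandler) Generic(e event.GenericEvent, q workqueue.RateLimitingInterface) {
	log.Info("generic event", "namespace", e.Object.GetNamespace(), "name", e.Object.GetName())
	h.inner.Generic(e, q)
}

The same idea applies to custom event sources and predicates if that is where the interesting information lives.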

austince commented 2 years ago

That makes sense, thanks for the reply. I think adding logging to the handlers seems like a good approach.

One complication I could see is that the builder doesn't expose ways to construct the handlers unless you use the lower-level Watches(..) method (i.e., not Owns(..) or For(..)).

For: https://github.com/kubernetes-sigs/controller-runtime/blob/b93b5f92794b9383427995678d22ddac396dba13/pkg/builder/controller.go#L222-L227

Owns: https://github.com/kubernetes-sigs/controller-runtime/blob/b93b5f92794b9383427995678d22ddac396dba13/pkg/builder/controller.go#L240-L243

What do you think about adding an option to modify the handler that is used?

I could see something like:

func (b *Builder) WithHandlerWrapper(wrap func(handler.EventHandler) handler.EventHandler) *Builder {
	// ...
}

var _ ForOption = &HandlerWrapper{}
var _ OwnsOption = &HandlerWrapper{}

Then we could wrap the handler with a handler that just logs and delegates.
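Purely for illustration, usage of the proposed option might look like this; WithHandlerWrapper does not exist in the builder today, loggingHandler is a hypothetical wrapper like the one sketched above, and myv1.MyApp is a placeholder for the controller's own type:

// Hypothetical usage of the proposed builder option (not part of controller-runtime).
err := ctrl.NewControllerManagedBy(mgr).
	For(&myv1.MyApp{}).         // placeholder CRD type
	Owns(&appsv1.Deployment{}).
	WithHandlerWrapper(func(h handler.EventHandler) handler.EventHandler {
		return &loggingHandler{inner: h} // log, then delegate to whatever the builder would have used
	}).
	Complete(r)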

What do you think?

austince commented 2 years ago

Hey @alvaroaleman, hope you don't mind the ping. Just wondering if you have thoughts on this proposal / would be able to review a PR in that direction.

alvaroaleman commented 2 years ago

Just use Watches and wrap the handler there; there's no reason to change the builder.
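A sketch of that workaround, assuming controller-runtime ~v0.12 APIs (source.Kind and the struct form of handler.EnqueueRequestForOwner) and the hypothetical loggingHandler wrapper from above; MyAppReconciler and myv1.MyApp are placeholders for the controller's reconciler and CRD type. The Owns(&appsv1.Deployment{}) call is replaced with an explicit Watches(...) that reproduces what Owns would install, wrapped so it logs first:

import (
	appsv1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/source"
)

func (r *MyAppReconciler) SetupWithManager(mgr ctrl.Manager) error {
	// Instead of Owns(&appsv1.Deployment{}), set up the equivalent watch by hand
	// so the owner-reference handler can be wrapped with the logging delegate.
	return ctrl.NewControllerManagedBy(mgr).
		For(&myv1.MyApp{}).
		Watches(
			&source.Kind{Type: &appsv1.Deployment{}},
			&loggingHandler{inner: &handler.EnqueueRequestForOwner{
				OwnerType:    &myv1.MyApp{},
				IsController: true,
			}},
		).
		Complete(r)
}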

austince commented 2 years ago

I think that's a decent workaround, but it forces us to reimplement the special setup for Owns and For. Do you have concerns other than that it's more to maintain?

anderssonw commented 1 year ago

Hey @austince, have you gotten a chance to look more into this? We are currently in the same boat and, due to some odd inconsistencies, want to see why some of our reconciliation loops start in the first place.

If you have any insight I would be glad to take it :)

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

anderssonw commented 1 year ago

Bump @austince :)

anderssonw commented 1 year ago

/remove-lifecycle stale

austince commented 1 year ago

This is the recommended approach: https://github.com/kubernetes-sigs/controller-runtime/issues/1997#issuecomment-1277540770

anderssonw commented 1 year ago

Hmm, I'll peep once I get time to prioritise it :)

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 9 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 9 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/controller-runtime/issues/1997#issuecomment-1951399337):

>The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
>This bot triages issues according to the following rules:
>- After 90d of inactivity, `lifecycle/stale` is applied
>- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
>- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
>You can:
>- Reopen this issue with `/reopen`
>- Mark this issue as fresh with `/remove-lifecycle rotten`
>- Offer to help out with [Issue Triage][1]
>
>Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
>/close not-planned
>
>[1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.