tektoncd / pipeline

A cloud-native Pipeline resource.
https://tekton.dev
Apache License 2.0

webhook_request_latencies_bucket metric keeps adding new data series and becomes unusable #3171

Open r0bj opened 3 years ago

r0bj commented 3 years ago

Expected Behavior

The Prometheus metric webhook_request_latencies_bucket is usable in a real environment and does not keep adding new data series forever. Prometheus is able to query the metric.

Actual Behavior

The Prometheus metric webhook_request_latencies_bucket creates so many data series that it is practically impossible to query in Prometheus (too much data). It keeps adding new series while the webhook is running, so the number of series grows forever. Restarting the tekton-pipelines-webhook pod resets the number of series and fixes the issue.

Steps to Reproduce the Problem

Run tekton-pipelines-webhook.
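
A small scraper makes the growth visible (a sketch only; the port-forward target in the comment assumes a default installation where the webhook exposes Prometheus metrics on port 9090, which may differ in your cluster):

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Assumption: the webhook's metrics endpoint has been forwarded locally, e.g.
	//   kubectl -n tekton-pipelines port-forward svc/tekton-pipelines-webhook 9090
	resp, err := http.Get("http://localhost:9090/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Count exposed bucket series for the metric in question.
	count := 0
	scanner := bufio.NewScanner(resp.Body)
	scanner.Buffer(make([]byte, 1024*1024), 1024*1024) // allow long lines, just in case
	for scanner.Scan() {
		if strings.HasPrefix(scanner.Text(), "webhook_request_latencies_bucket") {
			count++
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("webhook_request_latencies_bucket series (bucket lines): %d\n", count)
}
```

Running it every few minutes shows the bucket-line count only ever increasing until the webhook pod is restarted.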

Additional Info

ywluogg commented 3 years ago

I can take a look at it if no one else is working on it, but it may take me a while.

ywluogg commented 3 years ago

@ImJasonH will this be suitable as a good first issue?

imjasonh commented 3 years ago

/assign ywluogg

cc @NavidZ since this relates to metrics

eddie4941 commented 3 years ago

Dropping this here for context. The webhook_request_latencies_bucket metric (and others) is heavily influenced by the labels in question here: https://github.com/knative/pkg/pull/1464/files

Removing the labels in that pull request might help reduce the number of unique webhook_request_latencies_bucket series the webhook has to manage.

Aside from this, I don't know if there's a way to configure the metrics code to purge metrics from the in-memory store after a period of time. That would help too. Most of the time, the in-memory data is sent to a backend like Prometheus, Stackdriver, etc. anyway.
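
To illustrate why those labels matter (a sketch with prometheus/client_golang, not the actual OpenCensus-based code in knative/pkg; the metric and label names here are made up): every previously unseen label combination materializes a complete set of histogram bucket series, so any label whose values are unbounded, such as a resource name or namespace, multiplies the series count without limit.

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// A latency histogram keyed by labels, similar in spirit to
	// webhook_request_latencies_bucket (names and labels are illustrative).
	latencies := prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Name:    "webhook_request_latencies",
		Help:    "Webhook admission request latency (illustrative).",
		Buckets: prometheus.DefBuckets, // 11 default buckets
	}, []string{"operation", "resource_namespace"})

	reg := prometheus.NewRegistry()
	reg.MustRegister(latencies)

	// Each previously unseen (operation, resource_namespace) pair adds a full
	// set of bucket series plus _sum and _count. With short-lived, randomly
	// named namespaces this grows without bound.
	for i := 0; i < 3; i++ {
		ns := fmt.Sprintf("ci-run-%d", i)
		latencies.WithLabelValues("CREATE", ns).Observe(0.042)
	}

	mfs, _ := reg.Gather()
	for _, mf := range mfs {
		fmt.Printf("%s: %d label combinations\n", mf.GetName(), len(mf.GetMetric()))
	}
}
```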

tekton-robot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale with a justification. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

r0bj commented 3 years ago

/remove-lifecycle stale

tekton-robot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale with a justification. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

r0bj commented 3 years ago

/remove-lifecycle stale

tekton-robot commented 2 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale with a justification. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

r0bj commented 2 years ago

/remove-lifecycle stale

jerop commented 2 years ago

@ywluogg are you still looking into this?

@vdemeester looks like this issue would be addressed by TEP-0073: Simplify metrics, right?

ywluogg commented 2 years ago

Hi @jerop, I'm not looking into this anymore. Please unassign me. Thanks!

tekton-robot commented 2 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale with a justification. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

r0bj commented 2 years ago

/remove-lifecycle stale

tekton-robot commented 2 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale with a justification. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

QuanZhang-William commented 1 year ago

/assign @QuanZhang-William

tekton-robot commented 1 year ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale with a justification. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

r0bj commented 1 year ago

/remove-lifecycle stale

syurevich commented 1 year ago

We have a very similar problem. Many metrics have a resource_namespace label. In our case, these namespaces have randomly generated names and live for a short time, which leads to a very high cardinality for the resource_namespace label within about a week. That huge number of series results in growing memory consumption.

I agree with @eddie4941 that configuring the metrics code to purge metrics from the in-memory store after a period of time would help.
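
The purge idea would look roughly like this (a sketch only, written against prometheus/client_golang rather than the OpenCensus machinery knative/pkg actually uses; the type, metric, and label names are illustrative): remember when each namespace was last observed and periodically delete the series of namespaces that have gone quiet.

```go
package metricspurge

import (
	"sync"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// PurgingLatencies records a per-namespace latency histogram and forgets
// namespaces that have not been seen for maxAge. Illustrative sketch only.
type PurgingLatencies struct {
	mu       sync.Mutex
	vec      *prometheus.HistogramVec
	lastSeen map[string]time.Time
	maxAge   time.Duration
}

func NewPurgingLatencies(reg prometheus.Registerer, maxAge time.Duration) *PurgingLatencies {
	vec := prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Name:    "webhook_request_latencies", // illustrative name
		Help:    "Admission request latency by namespace (illustrative).",
		Buckets: prometheus.DefBuckets,
	}, []string{"resource_namespace"})
	reg.MustRegister(vec)

	p := &PurgingLatencies{vec: vec, lastSeen: map[string]time.Time{}, maxAge: maxAge}
	go p.purgeLoop()
	return p
}

// Observe records a latency and remembers that the namespace is still active.
func (p *PurgingLatencies) Observe(namespace string, seconds float64) {
	p.mu.Lock()
	p.lastSeen[namespace] = time.Now()
	p.mu.Unlock()
	p.vec.WithLabelValues(namespace).Observe(seconds)
}

// purgeLoop drops the series of namespaces that have gone quiet, bounding
// the number of live series at the cost of losing their history.
func (p *PurgingLatencies) purgeLoop() {
	for range time.Tick(time.Minute) {
		p.mu.Lock()
		for ns, seen := range p.lastSeen {
			if time.Since(seen) > p.maxAge {
				p.vec.DeleteLabelValues(ns)
				delete(p.lastSeen, ns)
			}
		}
		p.mu.Unlock()
	}
}
```

Dropping history for idle namespaces is usually acceptable when the namespaces are short-lived and randomly named anyway.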

pritidesai commented 1 year ago

Based on the discussion in the API WG:

/assign @khrm

tekton-robot commented 1 year ago

@pritidesai: GitHub didn't allow me to assign the following users: khrm.

Note that only tektoncd members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide

In response to [this](https://github.com/tektoncd/pipeline/issues/3171#issuecomment-1638555388):

> Based on the discussion in the API WG:
> /assign @khrm

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

khrm commented 1 year ago

/assign @khrm

khrm commented 11 months ago

@pritidesai This was fixed by https://github.com/knative/pkg/pull/1464

So we can close this.

/close

tekton-robot commented 11 months ago

@khrm: You can't close an active issue/PR unless you authored it or you are a collaborator.

In response to [this](https://github.com/tektoncd/pipeline/issues/3171#issuecomment-1658682419):

> @pritidesai This was fixed by https://github.com/knative/pkg/pull/1464
>
> So we can close this.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

syurevich commented 11 months ago

@khrm Not only the resource_name label but also the resource_namespace label can contribute to this high-cardinality issue. To fix it for every use case, one would need to purge metrics from the in-memory store after a period of time.

tekton-robot commented 8 months ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale with a justification. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

syurevich commented 8 months ago

This issue is still relevant. See this comment as well as this one.

zhouhaibing089 commented 6 months ago

We are hitting the same issue.

I have a proposal for knative/pkg at https://github.com/knative/pkg/pull/2931.

zhouhaibing089 commented 3 months ago

knative/pkg now gives the option to exclude arbitrary tags. I assume the next action item is to bump knative/pkg and customize the webhook options.
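
Mechanically, excluding a tag means the distribution is aggregated without that tag key, so values recorded for different namespaces collapse into one set of bucket series. A rough OpenCensus illustration (this is not the actual knative/pkg option or Tekton code; the measure, view, and tag names are made up for the example):

```go
package main

import (
	"context"
	"log"
	"time"

	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/tag"
)

var (
	requestLatency = stats.Float64("webhook/request_latency", "Admission request latency", stats.UnitMilliseconds)

	keyOperation = tag.MustNewKey("request_operation")
	keyNamespace = tag.MustNewKey("resource_namespace") // recorded, but not kept by the view below
)

func main() {
	// Register the distribution view with a reduced tag-key set. Any tag not
	// listed in TagKeys is aggregated away, i.e. "excluded".
	if err := view.Register(&view.View{
		Name:        "webhook_request_latencies", // illustrative
		Description: "Latency distribution of admission requests",
		Measure:     requestLatency,
		Aggregation: view.Distribution(1, 5, 10, 50, 100, 500, 1000),
		TagKeys:     []tag.Key{keyOperation},
	}); err != nil {
		log.Fatal(err)
	}

	// Tag the context with both keys; the namespace tag is not part of the
	// view's aggregation, so it cannot blow up cardinality.
	ctx, err := tag.New(context.Background(),
		tag.Insert(keyOperation, "CREATE"),
		tag.Insert(keyNamespace, "ci-run-12345"),
	)
	if err != nil {
		log.Fatal(err)
	}
	start := time.Now()
	// ... handle the admission request ...
	stats.Record(ctx, requestLatency.M(float64(time.Since(start).Milliseconds())))
}
```

Anything tagged on the context but absent from the view's TagKeys simply does not appear in that view's output, which is what keeps the series count bounded.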

afrittoli commented 2 months ago

@khrm are you still working on this issue? We marked it as "blocking" for a v1 release and we would like to make a v1 release in July. If you are working on it, will you be able to solve it by then? Thank you!

khrm commented 2 months ago

@afrittoli This seems to be resolved. It will be fixed by the knative update.

Parsavali commented 1 month ago

Do you know when this fix is expected to be part of Tekton?

zhouhaibing089 commented 1 month ago

FWIW, besides bumping knative.dev/pkg, it is also necessary to override the default StatReporter options.

zhouhaibing089 commented 1 month ago

Since #7989 is in, I am sending #8033 as a follow-up, which actually addresses this issue.

I anticipate there may be asks to make this a configuration option, and perhaps off by default, so any pointers would be greatly appreciated.