kubernetes / ingress-nginx

Ingress-NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/
Apache License 2.0

nginx_ingress_controller_requests is missing for Ingress that has had no requests #6937

Open lambchr opened 3 years ago

lambchr commented 3 years ago

NGINX Ingress controller version: 0.30.0

Kubernetes version (use kubectl version): 1.15

Environment:

What happened:

The nginx_ingress_controller_requests metric was missing for Ingresses that had received no requests. I queried nginx_ingress_controller_requests == 0 against our Prometheus metrics and found no time series.

What you expected to happen:

I expected Ingresses that have had no requests sent to them to have a nginx_ingress_controller_requests metric with a count of 0, rather than the metric being absent.
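Until the controller exposes zero-valued series, a common monitoring-side workaround is to coalesce the absent series to zero in PromQL (a sketch; the ingress name in the selector is illustrative):

```
sum(rate(nginx_ingress_controller_requests{ingress="test"}[5m])) or vector(0)
```

Note that vector(0) carries no labels, so this works for single-series panels and alerts, but not for aggregations that need to fan out across many label values.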

How to reproduce it:

$ kubectl --context <CONTEXT> -n <NAMESPACE> get ing test
NAME   HOSTS      ADDRESS        PORTS     AGE
test   <DOMAIN>   <LB_ADDRESS>   <PORTS>   7h40m

Anything else we need to know:

Please let me know if you need any other info from me, thanks for your time :)

/kind bug

kundan2707 commented 3 years ago

/assign

strongjz commented 3 years ago

Please upgrade to 0.46.0 and see if this is still an issue.

strongjz commented 3 years ago

/triage needs-information

lambchr commented 3 years ago

Hi @strongjz, sorry for the delayed response. I upgraded to 0.47.0 and repeated my test from above (created a new debug ingress, waited for the host and address to appear, checked whether the metric appeared) and still could not see any nginx_ingress_controller_requests metrics on the /metrics endpoint of any NGINX pod in the cluster. When I curled my new debug ingress and checked the NGINX pods' /metrics endpoint again, it showed an nginx_ingress_controller_requests metric for that ingress. Please let me know if you need any more info from me :)

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

lambchr commented 2 years ago

/remove-lifecycle stale

k8s-triage-robot commented 2 years ago

/lifecycle stale

lambchr commented 2 years ago

/remove-lifecycle stale

Hey @strongjz do you need any more information?

philwelz commented 2 years ago

I can confirm this behavior on version v1.0.4 / 4.0.6, where the nginx_ingress_controller_requests metric is missing until you make a request (for example with curl) to an ingress resource.

iamNoah1 commented 2 years ago

/triage accepted
/priority important-longterm

We are happy for any contributions.

/help
/good-first-issue

k8s-ci-robot commented 2 years ago

@iamNoah1: This request has been marked as suitable for new contributors.

Guidelines

Please ensure that the issue body includes answers to the following questions:

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-good-first-issue command.

In response to [this](https://github.com/kubernetes/ingress-nginx/issues/6937):

> /triage accepted
> /priority important-longterm
>
> We are happy for any contributions
> /help
> /good-first-issue

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

iamNoah1 commented 2 years ago

/remove needs-information

iamNoah1 commented 2 years ago

/remove triage/needs-information

iamNoah1 commented 2 years ago

/remove triage-needs-information

gauravkghildiyal commented 2 years ago

The issue seems to be a bit old, but I'm presenting my thoughts here to provide some closure to others who might see this.

I feel the problem at hand is that the metric itself could have multiple labels, and those labels could have multiple values. This makes it harder to initialize nginx_ingress_controller_requests for all combinations of labels and values (some values might not even be known until a request arrives). Something like https://github.com/prometheus/client_golang/issues/190 might be helpful, but even that might have problems with dynamic label values.
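The lazy-initialization behaviour is easy to picture with a toy labelled counter (a pure-Python sketch for illustration, not the controller's actual Go code): a series only exists once the first observation for its exact label combination arrives, and since values like path and status are only known at request time, there is no finite set of zero-valued series the exporter could create up front.

```python
from collections import defaultdict

class LabeledCounter:
    """Toy model of a labelled Prometheus counter: a time series is
    created lazily, on the first increment for a label combination."""

    def __init__(self):
        self.series = defaultdict(float)  # (label values) -> count

    def inc(self, ingress, method, path, status):
        self.series[(ingress, method, path, status)] += 1

    def collect(self):
        # Only label combinations that were actually observed exist;
        # there is nothing to export for an ingress with no traffic.
        return dict(self.series)

requests = LabeledCounter()
print(requests.collect())  # -> {} : an idle ingress has no series at all

requests.inc("test", "GET", "/", "200")
print(requests.collect())  # -> {('test', 'GET', '/', '200'): 1.0}
```

As far as I know, client_golang's CounterVec behaves the same way: a child series appears at 0 only if WithLabelValues is called explicitly for a known combination, which is exactly what is hard to do here.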

johurul000 commented 1 year ago

@iamNoah1 I am new to open source and Kubernetes, and I would like to work on this issue. Extra help would be great. šŸ™‚šŸ™‚

longwuyuan commented 1 year ago

@johurul000 can you write your own technical description of the problem to be solved here?

johurul000 commented 1 year ago

@longwuyuan no, since I am new, guidance is very helpful

longwuyuan commented 1 year ago

I mean, is the metric missing in your cluster also?

kingli-crypto commented 1 year ago

Hope this helps a fellow user: I was using a wildcard domain and had to add --metrics-per-host=false for this metric to work.
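For anyone else hitting the wildcard case, the flag is passed as a controller argument. A minimal sketch of the relevant Deployment fragment (image tag and names are illustrative):

```yaml
# excerpt from an ingress-nginx controller Deployment spec
containers:
  - name: controller
    image: registry.k8s.io/ingress-nginx/controller:v1.9.0
    args:
      - /nginx-ingress-controller
      # stop exporting request metrics per host (the default is true)
      - --metrics-per-host=false
```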

Aut0R3V commented 1 year ago

Hey, is anyone still working on this? I think I can take it up if someone can guide me on how to achieve this.

DanielViniciusAlves commented 1 year ago

Hey @strongjz and @lambchr, does the problem described in this issue still need to be fixed? If so, I would like to work on it.

jmulcahyTSC commented 1 year ago

I too am running into this issue when trying to implement Flagger, which refers to this metric in its documentation. Would love an update on this.

SehiiRohoza commented 1 year ago

I'm running into this issue while using NGINX Ingress Prometheus Overview in GCP Monitoring with the latest version.

fyaretskiy commented 1 year ago

imo this is a prometheus issue, they could introduce a function that returns a 0 if null. This isn't an issue specific to nginx_ingress_controller_requests.

SehiiRohoza commented 1 year ago

> imo this is a prometheus issue, they could introduce a function that returns a 0 if null. This isn't an issue specific to nginx_ingress_controller_requests.

Have you tried to report this "issue" to Prometheus? If yes, please share the link to your report.

fyaretskiy commented 1 year ago

> imo this is a prometheus issue, they could introduce a function that returns a 0 if null. This isn't an issue specific to nginx_ingress_controller_requests.

> Have you tried to report this "issue" to Prometheus? If yes, please share the link to your report.

I haven't. I read somewhere that it's the responsibility of the metric reporter to pre-populate the zeros, but I can't find the GitHub issue now.

BenHesketh21 commented 5 months ago

I can easily reproduce this and can see that the metric doesn't appear until requests come through. However, I'm not sure there's an obvious fix. We could set the metric to 0 when an ingress object is found, but the metric includes labels like method, path, and status (as in HTTP status code). What should these be set to when we set the value to 0? I'm not sure there's a nice way to do this so that the metric appears right away without ending up with series that are always 0 because the default labels we chose are never matched by a real request.
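The trade-off can be made concrete with a small sketch (pure Python, illustrative only, not the real exporter): pre-seeding a zero series means inventing label values, and the invented series never matches real traffic, so it lingers at 0 forever next to the real ones.

```python
# Series keyed by (ingress, method, path, status) label values.
series = {}

def preseed(ingress):
    # To export a zero immediately we must pick *some* label values;
    # empty-string placeholders are one option, but no real request
    # will ever carry them.
    series[(ingress, "", "", "")] = 0.0

def observe(ingress, method, path, status):
    key = (ingress, method, path, status)
    series[key] = series.get(key, 0.0) + 1

preseed("test")
observe("test", "GET", "/", "200")

# The placeholder series is still there, permanently zero, alongside
# the real one -- the always-zero-series concern described above.
print(series)
```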

StuxxNet commented 5 months ago

@rikatz I know that I have a followup for the tests, but if you don't mind I would like to work on this issue :D

rikatz commented 5 months ago

/assign @StuxxNet

Go for it

nisharyan commented 1 month ago

Any update on this?