kubernetes / ingress-nginx

Ingress NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/
Apache License 2.0
17.33k stars 8.22k forks

Path in ingress controller metric is always "/" #9927

Closed: AshokMishra closed this issue 3 weeks ago

AshokMishra commented 1 year ago

What happened:

I am trying to build a dashboard showing which APIs are called most often, based on the ingress controller metrics, but the path label in the metric is always set to "/". However, the logs have the correct value.

What you expected to happen:

I expect the path value to be correctly populated for every request.

Example:

```
nginx_ingress_controller_response_size_bucket{app_kubernetes_io_component="controller", app_kubernetes_io_instance="colossus", app_kubernetes_io_name="ingress-nginx", controller_class="k8s.io/ingress-nginx", controller_namespace="ingress-controller", controller_pod="colossus-ingress-nginx-controller-7dcc75f798-7l6s6", exported_namespace="colossus-awx-operator", host="awx-portal1.colossus-staging.nvidia.com", ingress="portal1-awx-ingress", instance=":10254", job="kubernetes-pods", le="+Inf", method="GET", namespace="ingress-controller", path="/", pod="colossus-ingress-nginx-controller-7dcc75f798-7l6s6", pod_template_hash="7dcc75f798", service="portal1-service", status="200"}
```

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

Kubernetes version (use kubectl version): 1.22.8

Environment: Production

How to reproduce this issue:

Anything else we need to know:

k8s-ci-robot commented 1 year ago

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
longwuyuan commented 1 year ago

/remove-kind bug
/kind support
/help

Can you show screenshots of the metric with that path "/"?

k8s-ci-robot commented 1 year ago

@longwuyuan: This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.

In response to [this](https://github.com/kubernetes/ingress-nginx/issues/9927).
sathieu commented 1 year ago

I think the path in the metric is the path from the ingress resource.

Having a metric per requested path would lead to a metrics explosion 💥.
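
To illustrate the point, here is a minimal, hypothetical Python sketch (not the controller's actual code) of why labeling a metric by the raw request path creates an unbounded number of time series, while labeling by the path defined on the Ingress keeps cardinality fixed:

```python
# Hypothetical sketch: each unique label combination becomes a separate
# time series in Prometheus. Labeling by the raw request path is
# unbounded; labeling by the Ingress-defined path is bounded by the
# number of paths in the Ingress spec.

def series_count(requests, label_fn):
    """Count the distinct time series produced for a stream of requests."""
    return len({label_fn(path) for path in requests})

# Assumed traffic mix: many unique static URLs plus one API endpoint.
requests = [f"/static/asset-{i}.js" for i in range(10_000)] + ["/api/users"]

# Label by raw request path: one series per unique URL.
raw = series_count(requests, lambda p: p)

# Label by matched Ingress path (here, two paths defined on the Ingress).
def ingress_path(p):
    return "/api" if p.startswith("/api") else "/"

matched = series_count(requests, ingress_path)

print(raw)      # 10001 series, and it keeps growing with traffic
print(matched)  # 2 series, bounded by the Ingress definition
```

The same trade-off is why the controller exports the Ingress path rather than the request URI.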

joshuapare commented 1 year ago

+1 to @sathieu's comment. The size of the index in most Prometheus-based TSDBs needed to store those values would get extremely large. We hit this on one of our Express backends using the express-prom-bundle package with the path-metrics feature enabled. Before we grouped static resources under a single path, memory usage on the Prometheus server grew by 8x, purely from the size of the indexes for all of the static assets being served.

@AshokMishra can you confirm that you indeed had a single path of "/" on the Ingress object? Or are you saying you had multiple paths on the Ingress object and they all showed up as "/" in the metrics?
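
If per-path metrics are genuinely needed, one common mitigation (a hypothetical sketch, not an ingress-nginx feature) is to normalize each raw request path to its longest matching Ingress path prefix before using it as a label value, which is the kind of grouping described above:

```python
# Hypothetical normalization: map a raw request path to the longest
# matching path prefix defined on the Ingress, falling back to "/".
# This keeps label cardinality bounded by the Ingress spec.
def normalize_path(request_path, ingress_paths):
    matches = [p for p in ingress_paths if request_path.startswith(p)]
    return max(matches, key=len) if matches else "/"

# Assumed example paths on the Ingress object.
paths = ["/", "/api", "/api/v2"]

print(normalize_path("/api/v2/users/42", paths))  # /api/v2
print(normalize_path("/static/app.css", paths))   # /
```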

github-actions[bot] commented 1 year ago

This is stale, but we won't close it automatically; just bear in mind that the maintainers may be busy with other tasks and will get to your issue as soon as possible. If you have any questions or want to request prioritization, please reach out in #ingress-nginx-dev on Kubernetes Slack.

ludat commented 5 months ago

One thing to mention: the official Grafana dashboard shows the path, and it is always "/", which is pretty confusing.

longwuyuan commented 3 weeks ago

While this would be a very nice feature to have, the project itself cannot allocate any resources to it, so there is no action for the project on this issue. If someone else wants to work on it, it would be a great value add.

It also needs to be mentioned that, though this is desirable, the controller pods would die and never meet anyone's expectation of "normal" if such a capability were added to the controller itself. To extrapolate the possible impact of this feature, imagine the CPU, memory, networking, and storage required to keep a metric count for every single path ever seen on a request, for the entire lifecycle of the controller pod and of the Prometheus instance. It would be too much by any standard, but those who want this could run their own fork of the code.
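
To make that extrapolation concrete, a rough back-of-envelope estimate (illustrative, assumed numbers only) of the series count if every observed request path got its own label value:

```python
# Illustrative arithmetic only: the total time-series count for one
# histogram metric is roughly paths x methods x statuses x buckets.
# With per-request paths, the path dimension grows without bound
# over the pod's lifetime.
unique_paths = 50_000  # assumed: distinct URLs seen over the pod lifetime
methods = 5            # e.g. GET, POST, PUT, DELETE, PATCH
statuses = 10          # distinct status codes observed
buckets = 12           # histogram buckets, including +Inf

series = unique_paths * methods * statuses * buckets
print(series)  # 30000000: thirty million series for a single metric
```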

Since there is no action-item tracking in this issue, I will close it for now.

/close

k8s-ci-robot commented 3 weeks ago

@longwuyuan: Closing this issue.

In response to [this](https://github.com/kubernetes/ingress-nginx/issues/9927#issuecomment-2345707577).