Closed: AshokMishra closed this issue 3 weeks ago.
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and providing further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
/remove-kind bug
/kind support
/help
Can you show screenshots of the metric with that path "/"?
@longwuyuan: This request has been marked as needing help from a contributor.
Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
I think the path in the metric is the path from the ingress resource.
Having a metric per requested path will lead to a metrics explosion 💥.
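To illustrate the point about the path coming from the ingress resource: if the Ingress declares only a single spec path of "/", a metric whose path label mirrors the Ingress spec can only ever report path="/". Below is a minimal sketch of such an Ingress, reconstructed from the label values in the metric example further down in this issue; the ingressClassName, pathType, and backend port are assumptions.

```yaml
# Hypothetical reconstruction: names and host are taken from the metric labels
# in the issue body; ingressClassName, pathType, and the backend port are assumed.
# With a single spec path of "/", a "path" label sourced from the Ingress spec
# will show "/" for every request, whatever the actual request URI was.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: portal1-awx-ingress
  namespace: colossus-awx-operator
spec:
  ingressClassName: nginx
  rules:
    - host: awx-portal1.colossus-staging.nvidia.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: portal1-service
                port:
                  number: 80
```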
+1 to @sathieu's comment. The size of the index in most Prometheus-based TSDBs to store those values would get extremely large. We hit this on one of our Express backends with the express-prom-bundle package with the path metrics feature enabled. When we didn't group static resources to a single path, the memory usage on the Prometheus server grew by 8x purely from the size of the indexes from all of the static assets being served out.
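For exporters that do emit a per-request path label, one common way to keep Prometheus memory bounded is to collapse high-cardinality values at scrape time, much like the static-asset grouping described above. A hedged sketch of such a rule follows; the job name is taken from the metric labels in this issue, and the /static/ grouping is purely illustrative.

```yaml
# Illustrative Prometheus scrape-config fragment: collapse any path under
# /static/ into the single value "/static" before ingestion, so every static
# asset no longer creates its own time series.
scrape_configs:
  - job_name: kubernetes-pods        # job name as it appears in the metric labels above
    metric_relabel_configs:
      - source_labels: [path]
        regex: /static/.*
        target_label: path
        replacement: /static
        action: replace
```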
@AshokMishra can you confirm that you indeed had a single path of "/" on the Ingress object? Or are you saying you had multiple paths on the Ingress object and they all showed up as "/" in the metrics?
This is stale, but we won't close it automatically; just bear in mind that the maintainers may be busy with other tasks and will get to your issue as soon as they can. If you have any question or want to prioritize this, please reach out in #ingress-nginx-dev on Kubernetes Slack.
One thing to mention is that the official Grafana dashboard shows the path and it's always "/", which is pretty confusing.
While this would be a very nice feature to have, the project itself cannot allocate any resources for it. As such, there is no action for the project on this issue. If someone else wants to work on it, it would be a great value add.
It also needs to be mentioned that, although this is desirable, the controller pods would die and never meet anyone's expectation of "normal" if such a capability were added to the controller itself. To get a sense of the possible impact of this feature, imagine the CPU, memory, networking, and storage required to keep a metric count for every single path ever seen on incoming requests, for the entire lifecycle of the controller pod and of the Prometheus instance. It would be too much by any standard, but those who want this could run their own fork of the code.
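As a rough, illustrative estimate of the scale involved (the numbers here are assumptions, not measurements from this issue): a workload with 10,000 distinct request paths, 5 HTTP methods, and 10 observed status codes already yields 10,000 × 5 × 10 = 500,000 label combinations per ingress, and a histogram such as nginx_ingress_controller_response_size_bucket multiplies that again by its number of le buckets, so a single controller pod could push millions of active series into Prometheus.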
Since there is no action-item tracking in this issue, I will close it for now.
/close
@longwuyuan: Closing this issue.
What happened:
I am trying to create a dashboard showing which APIs are called multiple times, based on the ingress controller metrics, but it seems the path label in the metric is always set to "/". However, the logs have the correct value.
What you expected to happen:
I expect the path value to be correctly populated for every request.
Example:
nginx_ingress_controller_response_size_bucket{app_kubernetes_io_component="controller", app_kubernetes_io_instance="colossus", app_kubernetes_io_name="ingress-nginx", controller_class="k8s.io/ingress-nginx", controller_namespace="ingress-controller", controller_pod="colossus-ingress-nginx-controller-7dcc75f798-7l6s6", exported_namespace="colossus-awx-operator", host="awx-portal1.colossus-staging.nvidia.com", ingress="portal1-awx-ingress", instance=":10254", job="kubernetes-pods", le="+Inf", method="GET", namespace="ingress-controller", path="/", pod="colossus-ingress-nginx-controller-7dcc75f798-7l6s6", pod_template_hash="7dcc75f798", service="portal1-service", status="200"}
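For reference, the kind of per-path breakdown the dashboard was meant to show could be expressed along these lines; this is a sketch only, using the controller's nginx_ingress_controller_requests counter, and the rule name and aggregation labels are assumptions. With the behaviour reported here, the path dimension collapses to a single "/" value, so the panel cannot distinguish APIs.

```yaml
# Hypothetical Prometheus recording rule for a "requests per API path" panel.
# With the behaviour reported in this issue, "path" only ever holds "/".
groups:
  - name: ingress-nginx-path-usage
    rules:
      - record: ingress:requests_per_path:rate5m
        expr: sum by (ingress, path, method, status) (rate(nginx_ingress_controller_requests[5m]))
```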
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
Kubernetes version (use kubectl version): 1.22.8
Environment: Production
Cloud provider or hardware configuration: In-house K8s cluster
OS (e.g. from /etc/os-release): Ubuntu 22.04
Kernel (e.g. uname -a):
Install tools:
Please mention how/where the cluster was created, e.g. kubeadm/kops/minikube/kind etc.
Basic cluster related info:
kubectl version
kubectl get nodes -o wide
How was the ingress-nginx-controller installed:
helm ls -A | grep -i ingress
helm -n <ingresscontrollernamespace> get values <helmreleasename>
Current State of the controller:
kubectl describe ingressclasses
kubectl -n <ingresscontrollernamespace> get all -A -o wide
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
Current state of ingress object, if applicable:
kubectl -n <appnamespace> get all,ing -o wide
kubectl -n <appnamespace> describe ing <ingressname>
Others:
kubectl describe ...
of any custom configmap(s) created and in use
How to reproduce this issue:
Anything else we need to know: