chirangaalwis opened this issue 3 years ago
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
IMO this is a valid requirement which needs addressing.
Folks, any update on this?
Hi @chirangaalwis
I'm a bit buried under other work and couldn't look into this.
If possible, can you provide an implementation PR for us? We can help you with it. Otherwise, you will rely on the availability of some of us, and speaking for myself, I'm really in a rush.
Anyway, I will add this to milestone v1.2.0 and see what we can do, ok?
We currently have HPA and KEDA related configurations, so this issue mainly wants to add some related metrics, right?
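For context, the existing KEDA route can already come close to this: KEDA can scale a workload on a Prometheus query against the controller's metrics, assuming the controller's metrics endpoint is scraped and the standard nginx_ingress_controller_requests counter is exposed. A minimal sketch, in which the Prometheus address, deployment name, ingress label, and threshold are all illustrative assumptions:

```yaml
# Hypothetical KEDA ScaledObject scaling a workload on ingress request rate.
# serverAddress, names, and the threshold are assumptions, not project defaults.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-app-scaler
spec:
  scaleTargetRef:
    name: example-app              # Deployment to scale (hypothetical)
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090
        # Requests/s over the last 2 minutes for one Ingress (label names may vary)
        query: sum(rate(nginx_ingress_controller_requests{ingress="example-app"}[2m]))
        threshold: "100"
```

This sidesteps the custom metrics API entirely, which is why it only partially answers the original request.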
@rikatz ack.
/remove-lifecycle stale
/triage accepted /priority longterm-important
@iamNoah1: The label(s) priority/longterm-important cannot be applied, because the repository doesn't have them.
/priority important-longterm
Anyone knows if this may help? :D
/help
@iamNoah1: This request has been marked as needing help from a contributor.
Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
Any updates on this? While reading the walkthrough I thought this was supported OOTB, but I've now learned that's not the case :(
This functionality would be so useful, because configuring HPA using CPU/memory is not accurate: sometimes there are enough resources, but the worker_connections limit is reached :(
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
What would you like to be added:
A custom metrics API implementation that captures the request count per unit time of a Kubernetes Ingress resource object when using the community NGINX Ingress Controller, so that it can be used as a metric for Horizontal Pod Autoscaling (HPA).
The ultimate goal is to be able to add a requests-per-unit-time metric to a Horizontal Pod Autoscaler, as described in the walkthrough, when using the community NGINX Ingress Controller implementation.
An example similar to the suggested solution can be found in the Skipper collector of kube-metrics-adapter. That particular solution works when using the Skipper Ingress Controller implementation.
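To make the goal concrete, the HPA consuming such a metric would presumably look like the Skipper-based example in the walkthrough, with an Object metric attached to the Ingress. Everything below (the metric name, object names, and target value) is an assumption about what a future implementation might expose:

```yaml
# Hypothetical HPA consuming a requests-per-second custom metric exposed for an
# Ingress object by a custom metrics adapter (metric name is an assumption).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Object
      object:
        describedObject:
          apiVersion: networking.k8s.io/v1
          kind: Ingress
          name: example-app
        metric:
          name: requests-per-second   # hypothetical metric name
        target:
          type: Value
          value: "100"                # scale up above 100 req/s total
```

The Object metric type is what lets the request rate be attributed to the Ingress itself rather than to the pods behind it.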
Why is this needed:
Requests per unit time is considered a highly accurate metric for HPA, especially when working with container-based deployments of language runtimes that involve garbage collection. Please see this #sig-autoscaling Slack channel discussion for details about this topic.
The community NGINX Ingress Controller is one of the most widely used implementations.
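One way this can be approximated today, as a sketch rather than the solution this issue proposes, is a prometheus-adapter rule that turns the controller's nginx_ingress_controller_requests counter into a per-Ingress requests-per-second custom metric. The label names used for the resource overrides are assumptions and may differ between controller and scrape configurations:

```yaml
# Hypothetical prometheus-adapter rule (config fragment; label names are assumptions).
rules:
  - seriesQuery: 'nginx_ingress_controller_requests{namespace!="",ingress!=""}'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        ingress: {resource: "ingress"}
    name:
      matches: "nginx_ingress_controller_requests"
      as: "requests_per_second"
    # Rate over 2 minutes, summed per Ingress object
    metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
```

This still requires running Prometheus and the adapter alongside the controller, which is the operational burden a built-in custom metrics implementation would remove.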
Notes:
Original requests were made in the #sig-autoscaling and #ingress-nginx-users Slack channels.
Discussion on accuracy of metrics for HPA
Original issue in the main Kubernetes Git repository
Suggested Assignees:
@rikatz @strongjz