vineel-concat opened 3 years ago
Is your app listening on port 9090? From your description it sounds like it's configured to listen on 8080, so you may just need to adjust the annotations in the HPA to point at the right port/path.
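For reference, pod-based scraping is configured through annotations on the HPA object itself. A sketch along the lines of the adapter's json-path collector example follows; the metric name, JSON key, deployment name, and port here are illustrative, not the poster's actual config:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
  annotations:
    # Scrape JSON from each pod at http://<podIP>:8080/metrics
    metric-config.pods.requests-per-second.json-path/json-key: "$.http_server.rps"
    metric-config.pods.requests-per-second.json-path/path: /metrics
    metric-config.pods.requests-per-second.json-path/port: "8080"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: requests-per-second
      target:
        type: AverageValue
        averageValue: "10"
```

The port in the annotation must be the port the application container itself listens on, not the Service port.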
My app is listening on port 8080. Sorry, I was thinking that 9090 is the port your service reads logs from. To provide more context about my application: it's a Python Flask app/endpoint that accepts HTTP requests, built on top of the TensorFlow Docker image. Can you please let me know what steps I need to follow in my case?
I have also tried the Prometheus queries based on the example provided in https://medium.com/google-cloud/kubernetes-autoscaling-with-istio-metrics-76442253a45a and I am getting the following error:
level=error msg="Failed to get metrics from pod 'test/hello-app-69fd99d7c8-52qfg': unsuccessful response: 404 NOT FOUND" Collector=Pod
time="2021-01-13T16:49:11Z" level=info msg="Collected 0 new metric(s)" provider=hpa
time="2021-01-13T16:49:41Z" level=error msg="Failed to collect metrics: query 'sum(\n rate(\n istio_requests_total{\n destination_workload=\"hello-app\",\n destination_workload_namespace=\"test\",\n reporter=\"destination\"\n }[1m]\n )\n) /\ncount(\n count(\n container_memory_usage_bytes{\n namespace=\"test\",\n pod_name=~\"hello-app.*\"\n }\n ) by (pod_name)\n)\n' returned no samples" provider=hpa
Your HPA above suggests you want to scale based on metrics exposed by the application itself on :9090/metrics. If you want to scale based on Prometheus metrics instead, you need to configure it following this example: https://github.com/zalando-incubator/kube-metrics-adapter#example-external-metric
Note that the blog post you refer to is based on the older format, so it's slightly different. See the link above to get the right configuration for Prometheus-based scaling.
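The external-metric configuration from that README looks roughly like this. This is a sketch, not the poster's actual manifest; the PromQL query, metric name, deployment name, and target value are all illustrative:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-app-hpa
  annotations:
    # PromQL query is illustrative; substitute your own
    metric-config.external.requests-per-second.prometheus/query: |
      scalar(sum(rate(istio_requests_total{destination_workload="hello-app"}[1m])))
    metric-config.external.requests-per-second.prometheus/interval: "30s"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: requests-per-second
        selector:
          matchLabels:
            type: prometheus
      target:
        type: AverageValue
        averageValue: "10"
```

Here the query lives in an annotation and the HPA metric is of type External, rather than a Pods metric scraped from the application.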
I have tried the example from https://github.com/zalando-incubator/kube-metrics-adapter#example-external-metric and I'm seeing the following error:
msg="Event(v1.ObjectReference{Kind:\"HorizontalPodAutoscaler\", Namespace:\"test\", Name:\"hello-app-pro\", UID:\"681342ce-a7e8-4b8f-9603-9bc421fa63aa\", APIVersion:\"autoscaling/v2beta1\", ResourceVersion:\"1183645\", FieldPath:\"\"}): type: 'Warning' reason: 'CreateNewMetricsCollector' Failed to create new metrics collector: no plugin found for {External processed-events-per-second}"
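One possible cause of "no plugin found" (an assumption, not confirmed in this thread) is that the External metric in the HPA lacks the collector-type selector label, so the adapter cannot tell which collector plugin should serve the metric. A sketch of the relevant fragment with the newer autoscaling/v2beta2 API:

```yaml
metrics:
- type: External
  external:
    metric:
      name: processed-events-per-second
      selector:
        matchLabels:
          type: prometheus  # tells the adapter to use the Prometheus collector
    target:
      type: AverageValue
      averageValue: "10"
```

Note the log above shows the HPA was created with APIVersion autoscaling/v2beta1, where the field names differ (metricName/metricSelector), which is another thing worth checking against the README's example.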
Hello, and sorry for commenting on an old issue, but I am new to the k8s world and I'm trying to get the HPA to work properly for the PHP application at my work. Scaling based on CPU is simply too slow: it scales up only after all our users have left again, because they waited 10 seconds for a response.
How do I get this to run in our GKE instance? I know how k8s works in general and normally how to add an adapter. But how do I add this adapter to an existing deployment? Which files in https://github.com/zalando-incubator/kube-metrics-adapter/tree/master/docs do I need to apply, and in what order?
I'm sorry for these noob questions; I'm just trying to learn and understand, and learning k8s on GKE hasn't been the easiest with all these differences.
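One way to install it (a sketch, assuming you have cluster-admin on the GKE cluster and kubectl pointed at it) is to clone the repo and apply the whole docs directory; kubectl applies every manifest in the directory, and since apply is declarative the order within the directory generally doesn't matter:

```shell
git clone https://github.com/zalando-incubator/kube-metrics-adapter.git
cd kube-metrics-adapter
# Applies every manifest in docs/ (RBAC, Deployment, Service, APIService registrations)
kubectl apply -f docs/
# Verify the custom/external metrics APIs are registered and Available
kubectl get apiservices | grep metrics
```

The adapter runs as its own Deployment; you do not add it to your application's deployment, you only add annotations and a metrics section to your HPA.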
Our application runs on port 8000 when running only the PHP application Docker image, but there is an nginx in front of the PHP app. The NodePort service routes from port 8080 to 80 (port: 8080, targetPort: 80). So should I put port 80, 8000, 8080, or 9000, which is the FPM container port?
Sorry, but I'm a bit confused by all of this. This is my response when I run kubectl get hpa. I have tried these ports: 9000, 8000, 8080. I will try port 9000 next, since that is the container port for the FPM container. I have also tried forwarding /metrics to the front so I could see what it tries to scrape, but I can't get that to work either.
I have just run the same logs command, and I get a similar error message. I am also using a load balancer to forward port 80 to 8080, so I have just updated the custom-metrics HPA to use that port instead.
@mikkeloscar You are saying that the HPA he provided in the original issue suggests that he wants to scale based on metrics exposed by the application itself on /metrics.
Isn't his example the example from the readme? I know our application isn't serving /metrics by itself unless we do something. But how do we get these metrics served on /metrics, and if they aren't supposed to be served by the application, what port should we put instead?
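For the "how do we get /metrics" part, a minimal sketch of an application serving its own Prometheus-style metrics endpoint, using only the Python standard library for brevity (the metric name is hypothetical; a real app would normally use a Prometheus client library for its language, e.g. prometheus_client for Python or promphp for PHP):

```python
# Minimal sketch: a server that counts requests and exposes the count
# at /metrics in the Prometheus text exposition format.
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Lock

request_count = 0          # hypothetical application metric
count_lock = Lock()

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global request_count
        if self.path == "/metrics":
            # Exposition format is just "<metric_name> <value>" lines
            body = f"http_requests_total {request_count}\n".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            with count_lock:
                request_count += 1
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

# To run standalone:
# HTTPServer(("0.0.0.0", 9090), MetricsHandler).serve_forever()
```

The port you give the adapter must be whichever container port serves this endpoint, not the Service or LoadBalancer port in front of it.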
I am running a GKE cluster with sql-proxy, nginx, and fpm containers in a deployment. The nginx containerPort is 80 and the fpm containerPort is 9000. The PHP web application runs on port 8000 when I run the Docker container outside k8s. But the NodePort service/load balancer looks like this:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-service-prod
  namespace: default
spec:
  type: NodePort
  selector:
    name: my-api-prod # Match the name of the container so k8s knows which pods to route traffic to
  ports:
  - port: 8080
    targetPort: 80 # Match container port from deployment.yaml
```
I am trying to create custom-metric-based autoscaling on GKE, based on requests per second.
1. Installed the kube-metrics-adapter by applying the https://github.com/zalando-incubator/kube-metrics-adapter/tree/master/docs yaml files.
2. On the GKE cluster, enabled Istio and installed Prometheus using
3. After installing, created an HPA based on the example provided in the readme.
4. Exposed the application using a load balancer, forwarding from port 80 to container port 8080.
5. In the cloud console, it shows the error "HPA cannot read metric value".
6. Checked the error log by running kubectl -n kube-system logs deployment/kube-metrics-adapter
7. Seeing the following error:

Thanks