Closed: mleneveut closed this issue 5 years ago.
same here:

{"level":"info","msg":"Listening on 0.0.0.0:8080","time":"2019-03-04T13:36:18Z"}
{"level":"info","msg":"Creating InCluster config to communicate with Kubernetes master","time":"2019-03-04T13:36:18Z"}
panic: the server could not find the requested resource (get pods.metrics.k8s.io)

goroutine 7 [running]:
github.com/google-cloud-tools/kube-eagle/vendor/github.com/weeco/kube-eagle/pkg/metrics_store.Collect()
	/go/src/github.com/google-cloud-tools/kube-eagle/vendor/github.com/weeco/kube-eagle/pkg/metrics_store/metrics_store.go:59 +0x386
main.main.func1()
	/go/src/github.com/google-cloud-tools/kube-eagle/main.go:56 +0x32
created by main.main
	/go/src/github.com/google-cloud-tools/kube-eagle/main.go:52 +0x47
Does this work for you?
kubectl get --raw /apis/metrics.k8s.io/v1beta1
If not, you need to set up metrics-server.
Seems to, yes:
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]}
Same issue here.
Here are the metrics-server (version 0.3.1) logs:
metrics-server-5b5bfd85cf-dzml5 metrics-server E0304 23:30:52.828598 1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/tiller-deploy-dbb85cb99-t5djw: no metrics known for pod
metrics-server-5b5bfd85cf-dzml5 metrics-server E0304 23:30:52.828611 1 reststorage.go:144] unable to fetch pod metrics for pod default/registry-docker-registry-84865c85c4-9w8ng: no metrics known for pod
metrics-server-5b5bfd85cf-dzml5 metrics-server E0304 23:30:52.828615 1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kubernetes-dashboard-cb6749dc6-94rqq: no metrics known for pod
metrics-server-5b5bfd85cf-dzml5 metrics-server E0304 23:30:52.828618 1 reststorage.go:144] unable to fetch pod metrics for pod eagle/kube-eagle-8654c7f9c4-hlj89: no metrics known for pod
metrics-server-5b5bfd85cf-dzml5 metrics-server E0304 23:30:52.828622 1 reststorage.go:144] unable to fetch pod metrics for pod eagle/prometheus-node-exporter-2jz94: no metrics known for pod
metrics-server-5b5bfd85cf-dzml5 metrics-server E0304 23:30:52.828626 1 reststorage.go:144] unable to fetch pod metrics for pod eagle/prometheus-pushgateway-75c4db7866-nhz88: no metrics known for pod
metrics-server-5b5bfd85cf-dzml5 metrics-server E0304 23:30:52.927754 1 reststorage.go:129] unable to fetch node metrics for node "admiring-matsumoto-u187": no metrics known for node
metrics-server-5b5bfd85cf-dzml5 metrics-server E0304 23:30:52.927777 1 reststorage.go:129] unable to fetch node metrics for node "admiring-matsumoto-u1nt": no metrics known for node
metrics-server-5b5bfd85cf-dzml5 metrics-server E0304 23:30:52.927782 1 reststorage.go:129] unable to fetch node metrics for node "admiring-matsumoto-u1n6": no metrics known for node
EDIT: I think my issue is related to https://github.com/kubernetes-incubator/metrics-server/issues/143
UPDATE: I solved the above issue here: https://github.com/kubernetes-incubator/metrics-server/issues/143#issuecomment-469480247.
BTW, kube-eagle is not working 100% yet: when I port-forward (kubectl port-forward kube-eagle 8080:8080) I still get a 404 error for the / path, and the collectors aren't working either (related: https://github.com/google-cloud-tools/kube-eagle/issues/8).
@alfonmga Thanks for sharing your solution.
@mleneveut There shouldn't be a response on /. The metrics endpoint is at /metrics. Did you use the correct path for the metrics endpoint? If you used the Helm chart for deployment, you should already have the right pod annotations for your Prometheus instance.
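For reference, annotation-based scraping of the kind mentioned above typically looks something like this (a sketch assuming the conventional prometheus.io annotations honored by the default scrape configs of the stable Prometheus charts; your chart version may use different names):

```yaml
# Hypothetical pod-template annotations for annotation-based scraping.
# These only take effect if your Prometheus scrape config is set up
# to look for them (as the stable/prometheus chart's default config is).
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/metrics"
```

Note that the prometheus-operator does not use these annotations by default; it discovers targets via ServiceMonitor resources instead.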
I updated to 1.1.0.
I can see metrics on the /metrics endpoint (the docs say to port-forward and check / and /health; you should change that to /metrics).
eagle_node_resource_allocatable_cpu_cores{node="aks-nodepool1-14107490-0"} 2.0
eagle_node_resource_allocatable_cpu_cores{node="aks-nodepool1-14107490-1"} 2.0
eagle_node_resource_allocatable_cpu_cores{node="aks-nodepool1-14107490-3"} 2.0
I added the 2 flags to my metrics-server:0.2.1 (I didn't have the cert errors)
I don't see the metrics in my Prometheus, only on the /metrics endpoint of kube-eagle.
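For context, "the 2 flags" presumably refers to the kubelet-connection flags discussed in the linked metrics-server issue; a sketch, under the assumption that they are --kubelet-insecure-tls and --kubelet-preferred-address-types (flag support varies between metrics-server versions, so check your release's documentation):

```yaml
# Assumed container args for a metrics-server Deployment, as commonly
# suggested in kubernetes-incubator/metrics-server#143. Image tag and
# flag set are illustrative, not confirmed for every version.
containers:
  - name: metrics-server
    image: k8s.gcr.io/metrics-server-amd64:v0.3.1
    args:
      - --kubelet-insecure-tls
      - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
```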
Oh I see that's part of the helm chart output. I will update it. Thanks for the heads up. I believe your issue is solved now, right?
Unfortunately no. I can see the metrics on kube-eagle's /metrics endpoint, but in my Prometheus I don't have any eagle_* metrics.
Do I have to do any configuration in my Prometheus so that it scrapes kube-eagle? I use the stable/prometheus-operator Helm chart. kube-eagle is deployed in the same namespace as Prometheus.
I am not familiar with your prometheus configuration, but if you are running the prometheus operator you might need to add the service monitor as described in the helm chart PR here: https://github.com/google-cloud-tools/kube-eagle-helm-chart/pull/1
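The ServiceMonitor mentioned above can be sketched roughly like this (assumptions: the kube-eagle Service exposes a port named "metrics", is labeled app: kube-eagle, and your Prometheus selects ServiceMonitors carrying the release label shown; see the linked PR for the chart's actual manifest):

```yaml
# Minimal ServiceMonitor sketch for the prometheus-operator.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kube-eagle
  labels:
    release: prometheus-operator  # must match your Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: kube-eagle             # must match the kube-eagle Service's labels
  endpoints:
    - port: metrics               # named port on the Service
      path: /metrics
      interval: 30s
```

If the ServiceMonitor exists but the target never appears, the usual culprit is a label mismatch between the ServiceMonitor and the Prometheus resource's serviceMonitorSelector.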
Is this issue resolved? I still get a 404 with the latest Helm chart deployment. Is there any workaround?
@karthik101 Can you have a look at this: https://medium.com/oracledevs/kube-eagle-on-oracle-kubernetes-engine-f2c8a3730565
@saiyam1814 Thanks, next time I'd better read the description first :D
Could you re-edit your comment with this: https://medium.com/oracledevs/kube-eagle-on-oracle-kubernetes-engine-f2c8a3730565
And can you rearrange the commands? They got joined into the sentences.
Yeah, my bad. I copied directly from my Medium post on my mobile and pasted it, and it turned out like this.
What was the resolution? I just deployed kube-eagle on my k8s and I encounter the same issue: metrics-server is deployed, and when I access the /metrics endpoint directly it gives me the data, but it is not scraped automatically. Nothing relevant in the metrics-server logs.
Do you have the "eagle_*" metrics back?
I created the kube-eagle with your helm chart.
I have a Prometheus operator created with stable/prometheus-operator chart.
The pod logs:
When I port-forward : kubectl port-forward kube-eagle-69c44869d7-qw7sr 8080:8080
http://localhost:8080 => error 404
http://localhost:8080/health => HTTP 200, text is "Ok"
When I look into my Prometheus, I don't have any metric labeled "eagle_*"
Do I have to add some target in my Prometheus to scrape the kube-eagle pod?
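If a ServiceMonitor is not an option, one alternative sketch for the question above is a static scrape job via the stable/prometheus-operator chart's additionalScrapeConfigs value (the service DNS name, namespace, and job name below are assumptions, not taken from this thread):

```yaml
# values.yaml fragment for the stable/prometheus-operator chart.
# Adds a hand-written scrape job alongside operator-managed ones.
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: kube-eagle
        metrics_path: /metrics
        static_configs:
          - targets:
              # Assumed Service name/namespace; adjust to your deployment.
              - kube-eagle.monitoring.svc.cluster.local:8080
```

A ServiceMonitor is generally preferable with the operator, since static configs bypass its discovery model and won't follow the pod if the Service changes.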