hinimix opened this issue 4 years ago
If the adapter is installed and you use it to serve the resource metrics API, then you don't need metrics-server in your cluster.
Why is the adapter still preferred over metrics-server?
After having continual issues with the adapter, last year I switched back to metrics-server and haven't had any issues since.
Given that heapster is long deprecated and metrics-server now apparently does everything Prometheus needs, and given that it's been the "official" default metrics provider in k8s for some time, doesn't it make sense to deprecate use of the 3rd-party adapter?
They're not in conflict. If you are running Prometheus and are collecting container-level metrics, then it's unnecessary to run metrics-server as well, since it would collect the exact same information Prometheus already does. So if you are using Prometheus, it doesn't make sense to run metrics-server, but if you are not using Prometheus then metrics-server makes sense. It's just about not re-inventing the wheel and not using up unnecessary resources on metrics-server, and since kube-prometheus is all about running Prometheus on Kubernetes, it's a natural choice to use the prometheus adapter :)
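To make that concrete: both components register themselves as the backend for the same resource metrics API group, so only one of them can serve it at a time. A minimal sketch of such a registration, assuming the kube-prometheus defaults of a prometheus-adapter Service in the monitoring namespace (names and TLS settings may differ in your setup):

```yaml
# Sketch: an APIService handing metrics.k8s.io/v1beta1 to the prometheus-adapter.
# metrics-server registers this exact same API name, which is why running both
# backends for it is redundant.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  version: v1beta1
  service:
    name: prometheus-adapter   # assumed Service name
    namespace: monitoring      # assumed namespace
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
```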
@brancz
First of all, thank you for putting in so much time and effort on this project, keeping up with issues, and taking the time to respond. Please take my comments below in the best light possible; I couldn't live without the hard work that has been put into this project. I apologize if this sounds ranty. I hope it is taken as a respectful contribution of ideas and not a criticism. I comment here as a real production cluster operator (in a conservative industry) and not a developer. However:
I disagree that it is the natural choice to use prometheus adapter. In fact I would argue the opposite.
The problem here is that it replaces metrics-server. Metrics-server didn't always exist; there was a time when it was necessary to install the adapter to get the right metrics at all, and that was great at the time.
But this hasn't been the case for a while. Metrics-server is a first-class citizen in the Kubernetes project, and it's necessary for certain core k8s functionality to work aside from Prometheus, e.g. the HorizontalPodAutoscaler, kubectl top, the k8s dashboard, etc. As a first-class member, it's taken into account and supported in Kubernetes releases. It also happens to work perfectly alongside kube-prometheus, so there is no need for the adapter. This is exactly what we do, in fact: I remove the adapter components when I deploy kube-prometheus, as sketched below.
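For anyone who wants to do the same, here is a rough sketch of how that can look. It assumes the generated kube-prometheus manifests/ directory names the adapter files with a prometheus-adapter- prefix and keeps the namespace/CRDs under manifests/setup/, which may vary between releases:

```sh
# Drop the adapter from the generated kube-prometheus manifests before applying
# (the file prefix and directory layout are assumptions; check your generated output).
rm manifests/prometheus-adapter-*.yaml

# Apply the setup manifests (namespace, CRDs) first, then the remaining components.
kubectl apply -f manifests/setup/
kubectl apply -f manifests/

# metrics-server is then installed separately from its own upstream manifests.
```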
On the other hand, the adapter is a third-party project. That's fine, but nobody is going to take it into consideration in k8s releases, regardless of it initially setting an example of the correct path forward, so it's already at a disadvantage. If I'm not wrong, it's up to the adapter maintainers to play catch-up. Here is the big problem with it taking over the core functionality of metrics-server: there's now a risk that an incompatibility will cause failure in critical functions unrelated to Prometheus. In fact, without going into too much detail, I experienced several issues over time that I traced directly to the adapter itself.
For what it's worth, I've never had one problem with metrics-server. It appears rock-solid.
I'm not sure the importance of leaving metrics-server intact is taken seriously by kube-prometheus. It is good that at least the option to stay with metrics-server is configurable, just by going through and removing the adapter prior to deploying, but I would argue that metrics-server should be treated as the first choice by now, barring some other need to swap it out. Since the introduction of metrics-server over a year ago, the adapter has been superseded for several versions of k8s, and if not considered carefully it potentially adds unnecessary complexity and fragility to a k8s cluster that a monitoring tool shouldn't cause.
I realize this might sound like a rant against kube-prometheus so please don't take it that way. Kube-prometheus is awesome and super critical to have and the contributors and maintainers here are doing incredible work. Please keep it up! Thank you!
Thanks for the feedback, it's highly appreciated! It's not at all taken as a rant, I find it very constructive.
If people experience issues with the prometheus adapter then we'd be more than happy about reports of these. We run it in thousands of clusters without problems (of course any software including metrics-server has bugs, but we are not experiencing anything major with the latest versions).
For what it's worth, the same people who maintain metrics-server maintain the prometheus adapter, so both are equally under consideration for new Kubernetes releases. And again, it's perfectly legitimate to prefer to use metrics-server, and people need to weigh whether to use the adapter vs metrics-server for exactly the reasons you mention. kube-prometheus is intentionally an opinionated setup in many ways, and it chooses to use the prometheus adapter by default, but there is nothing wrong with preferring metrics-server (hence opinion :slightly_smiling_face: ).
@brancz
Thanks for the reply and for listening! I'm glad I didn't come off negatively.
Definitely, considering that the developers are the same, it makes even more sense to me to default to the "official" one, unless there is some requirement it doesn't meet? :) It has the full support of the k8s project, more contributors, more GitHub stars and followers (for whatever that's worth).
metrics-server is just an implementation of the API; that doesn't make it any more official than any of the other implementations. In fact, the only reason the prometheus adapter is not in the kubernetes-sigs org yet is that we want to fix some things with the custom metrics API. If it were just the resource metrics API, we would have long since moved it, making it equally "official".
kube-prometheus is not going to change its stance on this, but if you want to add it, I'd be happy about a note in the readme saying that this is an opinionated decision and that people may very well use metrics-server instead of the prometheus-adapter should they choose to.
Sure if you'd like me to put in a PR I would be willing to as well. Thank you for keeping an open mind on this.
Also, I would love to see the adapter added to kubernetes-sigs, as a first-class citizen in the ecosystem.
As a user (and I've heard the same feedback from others), I am just bewildered why a third-party tool like the prometheus adapter is able to provide custom metrics to be used for HPA scaling decisions, whereas metrics-server (supposedly the first-class citizen) is only able to provide CPU and memory metrics for scaling. I wonder what the history was that led to the birth of the prometheus adapter, instead of collaborators continuing to build out metrics-server to expand its capability.
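To illustrate the gap being described: with only the resource metrics API (what metrics-server serves), an HPA can target CPU and memory; with the custom metrics API (which the prometheus adapter additionally serves), it can target arbitrary application metrics. A hedged sketch with purely illustrative names:

```yaml
# Sketch of an HPA driven by a custom per-pod metric exposed via custom.metrics.k8s.io.
# "example-app" and "http_requests" are illustrative, not taken from this thread.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests
      target:
        type: AverageValue
        averageValue: "500m"   # i.e. 0.5 requests/s per pod on average
```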
@tonystaark metrics-server is meant to be a lightweight bootstrapper addon, prometheus is not
What happened? On this site:
https://github.com/DirectXMan12/k8s-prometheus-adapter
I found these words: "This adapter is therefore suitable for use with the autoscaling/v2 Horizontal Pod Autoscaler in Kubernetes 1.6+. It can also replace the metrics server on clusters that already run Prometheus and collect the appropriate metrics."
That told me the prometheus adapter can replace metrics-server, but I found that the two register the same APIService. When I execute `kubectl apply -f metrics-server`, I can get results from `kubectl top nodes`. When I execute `kubectl apply -f prometheus-adaptor-apisSrvices.yaml`, I cannot get results from `kubectl top nodes`.
Did you expect to see something different? They have the same APIService:
v1beta1.metrics.k8s.io
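For reference, one way to check which backend currently owns that API, and why `kubectl top nodes` stops answering, is to inspect the APIService itself (a debugging sketch, not something the reporter ran):

```sh
# Show which Service currently backs the resource metrics API and whether it is healthy.
kubectl get apiservice v1beta1.metrics.k8s.io -o yaml
# spec.service shows whether metrics-server or prometheus-adapter owns the API;
# if status.conditions reports Available=False, `kubectl top nodes` will fail.

# Quick overview of all registered metrics API groups.
kubectl get apiservices | grep metrics
```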
How to reproduce it (as minimally and precisely as possible): compare the APIServices in the metrics-server manifests and the prometheus-adapter manifests.
Environment: CentOS 7.7, Kubernetes v1.16.2, Docker 19.03
Prometheus Operator version: v0.34.0
```
Progressing    True    NewReplicaSetAvailable
Available      True    MinimumReplicasAvailable
OldReplicaSets:
NewReplicaSet:  prometheus-operator-99dccdc56 (1/1 replicas created)
Events:
```
Kubernetes version information:
```
[root@k8s-master-1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
```
Kubernetes cluster kind: kubeadm
Manifests:
Anything else we need to know?: