bentrombley opened this issue 4 years ago
Sorry, I'd submit a pull request, but that would require a lengthy approval process from my employer.
Hi @bentrombley, there should be a default role in the cluster called extension-apiserver-authentication-reader.
We bind this role here: https://github.com/zalando-incubator/kube-metrics-adapter/blob/97ec13d010bd58af766289932381e957ab47de30/docs/rbac.yaml#L116
Does your cluster not have this role by default? From what I can see, it should also be present in v1.15.7.
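For readers following along: the line referenced above is a RoleBinding in kube-system that grants the adapter's service account the default role. A minimal sketch of what such a binding looks like (the metadata and service-account names here are illustrative assumptions; docs/rbac.yaml is the authoritative version):

```yaml
# Sketch: bind the default extension-apiserver-authentication-reader role
# to the adapter's service account. Names below are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: custom-metrics-auth-reader
  namespace: kube-system        # the role only exists in kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: kube-metrics-adapter
  namespace: kube-system
```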
I experienced the same issue; after I added that line it works perfectly.
Same here: adding this section to the clusterrole/custom-metrics-resource-collector made the metrics work on my k8s (1.18.3).
What confuses me is that I installed a cluster a month ago (1.18.2), without that part, and it works... :thinking:
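For context, the section people in this thread are adding is a configmaps rule on the custom-metrics-resource-collector ClusterRole. The exact rule was not quoted in this extract, but based on the discussion it presumably looks something like this sketch:

```yaml
# Illustrative rule granting read access to configmaps, as suggested in
# this thread. Note it covers ALL configmaps, not just the one needed.
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
```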
Hello, I came up with this fix also, but I'm still getting the following logs.
I0624 19:40:42.941182 1 serving.go:306] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
W0624 19:40:43.437556 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0624 19:40:43.437629 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
In debug mode I can see the configmap being read, though.
Can you check if you have the role extension-apiserver-authentication-reader in your cluster?
I have just created a new cluster based on v1.18.6 and it's there by default:
kubectl --namespace kube-system get role extension-apiserver-authentication-reader
NAME                                        CREATED AT
extension-apiserver-authentication-reader   2020-07-21T19:14:01Z
Or could it be that you're not deploying the kube-metrics-adapter to the kube-system namespace but to another one?
@prageethw, @edsonmarquezani, @bentrombley
@mikkeloscar
Yes, the default role exists in the kube-system namespace. It seems the issue is with custom-metrics-resource-reader and custom-metrics-resource-collector, which get created in whatever namespace I install the adapter into.
@prageethw Is installing to kube-system an option for you? Then we could change the docs to include the namespace for all resources. If not, we need to adapt the docs for users not installing to kube-system, where the extension-apiserver-authentication-reader role is not available. But this also means the configmap is not available, and thus I would expect some things not to work, as the configmap includes the CA of the apiserver.
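One least-privilege option when the adapter is not deployed to kube-system would be to keep the RoleBinding in kube-system (where the role lives) but point its subject at the adapter's service account in its own namespace. A sketch, where the binding name, service-account name, and the "monitoring" namespace are all assumptions:

```yaml
# Sketch: the RoleBinding stays in kube-system, but the subject references
# a service account in the namespace the adapter actually runs in.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-metrics-adapter-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: kube-metrics-adapter
  namespace: monitoring   # assumed namespace for the adapter
```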
@mikkeloscar Thanks for the reply. Yes, installing in kube-system is always an option, though I'm not sure that's best practice; generally I like to keep kube-system untouched, though I guess everyone is different. What stops us from giving enough access rights to custom-metrics-resource-reader and custom-metrics-resource-collector instead of forcing users to install to kube-system?
> What stops us from giving enough access rights to custom-metrics-resource-reader and custom-metrics-resource-collector instead of forcing users to install to kube-system?
What I want to avoid is that we document more access than is needed. With the change suggested in this thread and in #181 the kube-metrics-adapter would get access to ALL configmaps instead of just a single one as intended.
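To make that trade-off concrete, here is the difference between the two rules (illustrative YAML, not the project's documented manifests):

```yaml
# Broad version discussed in this thread: read access to EVERY configmap.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get"]
# Scoped version: read access only to the one configmap that is needed.
# Note that resourceNames only restricts verbs like get, not list/watch.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["extension-apiserver-authentication"]
  verbs: ["get"]
```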
@mikkeloscar Fair point. TBH, I think that is an extreme measure: if someone can access custom-metrics-resource-collector and custom-metrics-resource-reader, they are most likely already inside the cluster.
I want to document the best practice, which is the least amount of permissions. If users need something custom or more relaxed, they're free to use a custom role setup. I'm also fine documenting the extra permissions if we clearly state the reason (e.g. not deploying in kube-system). However, considering that the original issue clearly describes a problem when the service account is in kube-system, and that we also only document kube-system as the default setup right now, I suspect something else is wrong if it doesn't work for folks without these extra changes.
Does anyone in this thread deploy to kube-system and still have the problem with the default RBAC roles we have documented?
I prefer to have all monitoring-related stuff in the monitoring namespace. I have moved the adapter for now, though, because it seems the only way to get this to work is to have it in kube-system, since even giving it the right RBAC would mean it needs a cluster-wide configmap privilege instead of the single-configmap privilege documented here.
We just upgraded from 0.1.0 to 0.1.3 and started seeing errors in our logs like:
I'm not sure what changed, but adding this apiGroups section to the rules for custom-metrics-resource-collector fixed it for us:

Expected Behavior
There should be no errors/warnings in the logs.
Actual Behavior
See above logs. This caused the HorizontalPodAutoscalers to fail.
Steps to Reproduce the Problem

1. Deploy the manifests from the docs/ folder.
2. Check the logs of the kube-metrics-adapter pod.

Specifications