Open sll552 opened 3 months ago
I agree that this is not desirable behavior and it should not work that way.
While I look into this, one workaround is to use the Deployment approach and filter via ServiceMonitor relabeling:
https://github.com/jmcgrath207/k8s-ephemeral-storage-metrics/blob/master/chart/values.yaml#L13
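For illustration, a minimal sketch of such a relabeling. The exact nesting under the chart's values (`serviceMonitor.metricRelabelings` here) and the `node_name` label are assumptions; check chart/values.yaml and the exposed metrics for the real names.

```yaml
# Illustrative only: keep only series whose node_name matches a given node,
# dropping everything scraped for other nodes. Verify the values structure
# against chart/values.yaml before using.
serviceMonitor:
  metricRelabelings:
    - sourceLabels: [node_name]   # label name assumed; confirm on /metrics
      regex: "worker-node-1"      # node whose series should be kept
      action: keep
```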
The DaemonSet service needs some rework to query only its own node. I want to do it, but it's going to take a while.
The main reason for using the DaemonSet approach was that it provides at least some sort of "load distribution" for the API server, and because it's a little more resilient to failures (e.g. you only lose metrics for one node if a pod dies).
Looking at the code, it seems that each DaemonSet Pod is already supposed to scrape only its own node, so this should be a bug.
Maybe (and that is just a wild guess from looking at the code) it has to do with the Node informer that is being created. I didn't see any other place where nodes get added to Node.Set,
so it could be that the informer adds all nodes after the initialization that happens at https://github.com/jmcgrath207/k8s-ephemeral-storage-metrics/blob/master/pkg/node/k8s.go#L37
Also, imho, the informer is not needed when running as a DaemonSet, since node scaling also scales the DaemonSet; it should therefore be fine to not call Node.Watch()
at https://github.com/jmcgrath207/k8s-ephemeral-storage-metrics/blob/master/cmd/app/main.go#L124 if it's a DaemonSet deployment. A rough sketch of that idea follows.
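This is not the project's actual code, just an illustration under assumptions: `DEPLOY_TYPE` and `NODE_NAME` environment variables (the latter injected via the downward API) and an in-memory node set standing in for Node.Set. In DaemonSet mode the informer is skipped entirely; in Deployment mode all nodes are watched.

```go
package main

import (
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

// buildNodeSet returns the set of node names this instance should scrape.
func buildNodeSet(client kubernetes.Interface, deployType string) map[string]struct{} {
	nodeSet := map[string]struct{}{}

	if deployType == "DaemonSet" {
		// DaemonSet mode: only the node this pod runs on, taken from the
		// downward API (NODE_NAME). No informer is started, so later node
		// additions can never widen the set of this instance.
		nodeSet[os.Getenv("NODE_NAME")] = struct{}{}
		return nodeSet
	}

	// Deployment mode: watch all nodes so the single instance scrapes the
	// whole cluster and picks up newly added nodes. (A real implementation
	// would also handle deletes and guard the map with a mutex.)
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	factory.Core().V1().Nodes().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			node := obj.(*corev1.Node)
			nodeSet[node.Name] = struct{}{}
		},
	})
	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	return nodeSet
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(buildNodeSet(client, os.Getenv("DEPLOY_TYPE")))
}
```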
Hi,
if I understand the intention of the different deployment types correctly, when using the DaemonSet each instance should only scrape the node it's running on. However, when scraping those instances, each one provides metrics for all nodes. When looking at the debug logs I can also see entries for all nodes being scraped (this is a 3-node cluster; ignore the numbering), e.g. (filtered for `proxy`):

Here is the YAML of the Pod that produced those logs: