grafana / agent

Vendor-neutral programmable observability pipelines.
https://grafana.com/docs/agent/
Apache License 2.0

Proposal: DaemonSet-like mode for Grafana Agent Operator #1495

Open rfratto opened 2 years ago

rfratto commented 2 years ago

Grafana Agent Operator currently requires deploying multiple sets of agents:

The specific resources deployed by the operator are ideally an implementation detail for users, but it's still not ideal that we need to deploy so many of them. One side effect of the current implementation is that the requests/limits you assign to the GrafanaAgent resource are shared by every agent workload the operator deploys. This is both wasteful (not all pods need the same requests/limits) and multiplicative (the total resources requested end up being your configured requests times the number of pods the operator determines it needs to deploy).
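To make the multiplication effect concrete, here is a minimal sketch. The `resources` block on the GrafanaAgent spec is assumed to be a standard Kubernetes ResourceRequirements stanza; treat the exact field placement as illustrative rather than the real CRD schema.

```yaml
# Illustrative GrafanaAgent resource; the exact spec layout is an assumption.
apiVersion: monitoring.grafana.com/v1alpha1
kind: GrafanaAgent
metadata:
  name: grafana-agent
spec:
  resources:          # assumed to apply to every pod the operator creates
    requests:
      cpu: 500m
      memory: 1Gi
```

If the operator renders, say, a metrics StatefulSet, a logs DaemonSet, and an integrations Deployment from this single resource, every resulting pod requests 500m/1Gi, so a 10-node cluster can end up reserving well over 5 CPUs and 10Gi of memory even though some of those pods need only a fraction of that.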

I propose that Grafana Agent Operator support a "DaemonSet-like" mode, where it manages one pod per node that handles all telemetry, including integrations. We should use a "DaemonSet-like" controller rather than a real DaemonSet so that PVCs can be created per pod, which real DaemonSets don't support.
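For context on the PVC point: in stock Kubernetes, per-pod PVCs come from a StatefulSet's volumeClaimTemplates; a DaemonSet has no equivalent field and can only mount pre-existing, hostPath, or ephemeral volumes. A minimal sketch of the mechanism a "DaemonSet-like" controller would need to reproduce per node (names and sizes are illustrative):

```yaml
# Plain Kubernetes StatefulSet: volumeClaimTemplates stamps out one PVC per pod
# (wal-agent-0, wal-agent-1, ...). DaemonSets have no such field, which is why
# the proposal calls for a custom "DaemonSet-like" controller.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: agent
spec:
  serviceName: agent
  replicas: 3
  selector:
    matchLabels:
      app: agent
  template:
    metadata:
      labels:
        app: agent
    spec:
      containers:
        - name: agent
          image: grafana/agent:latest   # illustrative tag
          volumeMounts:
            - name: wal
              mountPath: /var/lib/agent/wal
  volumeClaimTemplates:
    - metadata:
        name: wal
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
```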

As-is, this proposal isn't ready for work, and has at least a few dependencies:

Despite it not being ready, I'm opening this as a proposal now because:

aengusrooneygrafana commented 2 years ago

adding 👀 for @grafana/solutions-engineering

mrmartan commented 2 years ago

I am currently deploying a fleet of Grafana Agents on company Kubernetes clusters using both standalone/manual deployments (e.g. the Grafana Cloud-provided K8s integration) and resources deployed by the agent operator. Deploying and configuring all the agents for all three observability pillars, reliably under load and at scale, is anything but straightforward. There is a lot one has to know, even with the operator; I know some of it (e.g. sharding of metrics agents, load-balancing of traces agents), but there are many more areas where I don't yet know what I don't know.

I am wholly behind the idea presented here. It does not matter whether the implementation is DaemonSet-like or anything else. I don't think you have to restrict yourself to the notion of 'agent per node': you can't know how big each node is, or whether one vertically scaled agent instance can handle all of its load. That said, the operator could integrate with cAdvisor, kube-state-metrics, or similar, and use their data to scale the agents accordingly.

The specific resources deployed by the operator are ideally an implementation detail for users

That should be true but I don't feel it is. I have to understand what is happening under the hood to be able to scale.

Ideally I wouldn't want to deal with GrafanaAgent and LogsInstance/MetricsInstance at all. I would like to define only monitors (PodMonitor, ServiceMonitor, LogsMonitor, TraceMonitor) and be done with it, although I might be asking too much here 😄
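For reference, the monitor-only workflow already exists on the metrics side: the operator discovers the prometheus-operator PodMonitor/ServiceMonitor CRDs, while LogsMonitor and TraceMonitor are hypothetical names from this comment, not existing CRDs. A minimal sketch (labels, selector, and port name are illustrative):

```yaml
# Minimal PodMonitor (monitoring.coreos.com/v1, the prometheus-operator CRD
# picked up for metrics). Labels, selector, and port name are illustrative.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: my-app
  labels:
    instance: primary   # assumed to match a MetricsInstance's podMonitorSelector
spec:
  selector:
    matchLabels:
      app: my-app
  podMetricsEndpoints:
    - port: http-metrics
      interval: 30s
```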

rfratto commented 2 years ago

This proposal would be superseded by #1565.

james-callahan commented 1 year ago

One use case I'd like this for is scraping the local kubelet metrics on each node. Currently your MetricsInstance needs to be able to reach the host network of every node, which can be problematic in certain environments.

Solving this issue would potentially let you deprecate/remove the kubelet Service workaround (the Service/Endpoints object maintained so that every node's kubelet can be scraped).

Note that you should be able to specify, per DaemonSet, whether you want hostNetwork.
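To illustrate, here is a minimal sketch of a per-node agent DaemonSet using hostNetwork to scrape its local kubelet. hostNetwork, dnsPolicy, and the kubelet's default secure port 10250 are standard Kubernetes; the image tag, service account, and env wiring are assumptions for the example.

```yaml
# Sketch: each pod joins its node's network namespace, so the local kubelet is
# reachable without crossing nodes. Image tag and RBAC wiring are illustrative.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: agent-node
spec:
  selector:
    matchLabels:
      app: agent-node
  template:
    metadata:
      labels:
        app: agent-node
    spec:
      hostNetwork: true                   # share the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS working with hostNetwork
      serviceAccountName: grafana-agent   # assumed SA with RBAC to scrape the kubelet
      containers:
        - name: agent
          image: grafana/agent:latest     # illustrative tag
          env:
            - name: NODE_NAME             # node name via the downward API,
              valueFrom:                   # handy for labeling the local target
                fieldRef:
                  fieldPath: spec.nodeName
# With hostNetwork enabled, the kubelet's default secure endpoint is reachable
# at https://localhost:10250/metrics (authentication/authorization still apply).
```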