skhalash opened this issue 3 months ago
That sounds interesting. A similar approach is used in Metricbeat to cover cluster-level metrics collection when running as part of a DaemonSet.
I assume we want to achieve something similar here, where the Collector will be running only as a DaemonSet and the leader will be responsible for enabling the k8sclusterreceiver and k8sobjectsreceiver?
While this is useful from a user experience perspective, in my past experience it can be problematic when it comes to scale. When you deploy the Collector as a DaemonSet, you set the resource limits according to the Pod's needs. However, one of the Pods will be the leader and hence will require extra resources. To support this you need to increase the resource requests/limits for the whole DaemonSet, even though not all of the Pods will actually need those resources. This can be confusing. In addition, such a feature might also affect the load that the Collectors put on the K8s API. I have seen such issues in the past, but I don't have anything specific to share here. So in summary, such a feature would need to explicitly document the pros/cons (and maybe be tested accordingly) and properly set expectations for users.
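To illustrate the sizing concern, here is a minimal, hypothetical DaemonSet excerpt (the image tag and resource values are made up): because a DaemonSet has a single Pod template, any headroom sized for the leader ends up being requested on every node.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-collector
spec:
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:0.100.0  # hypothetical image/tag
          resources:
            requests:
              cpu: 100m
              memory: 256Mi   # sized for node-level collection
            limits:
              cpu: 500m
              memory: 512Mi   # must also cover the leader's cluster-level work,
                              # yet is reserved on every node in the cluster
```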
Hey @ChrsMark! Thanks for the feedback. Yes, exactly: the leader is responsible for enabling a sub-receiver (such as k8sclusterreceiver and k8sobjectsreceiver) if the collector is running as a DaemonSet or a Deployment.
Yes, I fully agree with the concerns about resource limits/requests. However, as you said, it should be properly documented. Not sure if there is a way to work around it.
Regarding putting some extra load on the k8s API - do you mean querying/updating leases?
> Regarding putting some extra load on the k8s API - do you mean querying/updating leases?
@skhalash yes, but this is something that can also be properly documented, along with some perf test results, so that users are aware of any possible impact on their clusters.
Why does it have to be a separate receiver? I think this should be an extension providing an interface that any receiver can connect to in order to check whether it's the leader (and has to do the work) or not (and does nothing). I would be happy to sponsor and review that.
Hey @dmitryax! Thanks so much for your response! We'd be happy to contribute such a component. I don't have much experience with extensions. Would this mean that every receiver needing this functionality would require code modifications? Implementing it as a delegating receiver, similar to receiver-creator, could avoid that.
Modifying code is fine - better than having two similar receivers. It's better to write some code but keep the user interface clean. If we restrict this to receiver_creator, we won't be able to use leader election for receivers that don't work with the receiver creator, for example https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/k8seventsreceiver, or that don't need to use it at all, e.g. collecting data from a static endpoint. Also, the future of receiver_creator is unclear. We might take a different approach to discovery and consolidate all scraping receivers into one, see https://github.com/open-telemetry/opentelemetry-collector/issues/11238
Thanks for the response! We'll explore the idea of implementing leader election as an extension instead of a receiver and will get back to you.
Hey @dmitryax,
We're exploring a specific use case and wanted to get your thoughts. The scenario involves multiple receivers within a single running collector instance that may need to operate in singleton mode, using leader election. To support this, each receiver might need its own lease, allowing each to be managed by a different leader.
We're considering two possible approaches:
The first keeps a single singleton extension and lets each receiver configure its own lease inline:

    receivers:
      k8s_cluster:
        singleton:
          lease_name: foo
          lease_namespace: default
      k8s_events:
        singleton:
          lease_name: bar
          lease_namespace: default
    extensions:
      singleton:
The second defines two extension instances, singleton/foo and singleton/bar, each referenced by its respective receiver:

    receivers:
      k8s_cluster:
        singleton:
          name: singleton/foo
      k8s_events:
        singleton:
          name: singleton/bar
    extensions:
      singleton/foo:
        lease_name: foo
        lease_namespace: default
      singleton/bar:
        lease_name: bar
        lease_namespace: default
The second approach seems simpler to implement. However, from what I've seen, there aren't any examples in the wild of registering multiple named instances of the same extension. Do you think this approach would be feasible?
This is a very interesting feature, and many of the plugins in otel col contrib are currently restricted to running on a single instance.
We did some investigation and agree that implementing leader election as an extension is indeed a cleaner solution. We will provide a PR with the implementation soon.
The purpose and use-cases of the new component
A receiver creator that can wrap an arbitrary sub-receiver and ensure that only one instance of this sub-receiver is active at a time in a high-availability OTel Collector setup. This is useful when multiple collector replicas are running, but only one of them should produce telemetry (metrics, traces, logs) at a time, while the remaining replicas stay inactive (standby mode). The mechanism is implemented using leader election.
Example configuration for the component
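A configuration sketch only: the component name (singleton_receiver_creator), the field names (lease_name, lease_namespace, receiver), and the wrapped k8s_cluster settings below are illustrative placeholders based on the lease settings discussed above, not a finalized interface.

```yaml
receivers:
  singleton_receiver_creator:
    # Lease used for leader election; only the replica holding the lease
    # starts the wrapped sub-receiver, the others stay on standby.
    lease_name: foo
    lease_namespace: default
    # Arbitrary sub-receiver that should run on the elected leader only.
    receiver:
      k8s_cluster:
        collection_interval: 30s

exporters:
  debug: {}

service:
  pipelines:
    metrics:
      receivers: [singleton_receiver_creator]
      exporters: [debug]
```

The wrapped sub-receiver would be started only on the replica that currently holds the lease; the other replicas keep their pipelines running but produce no data.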
Telemetry data types supported
traces, metrics, and logs
Is this a vendor-specific component?
Code Owner(s)
@skhalash @a-thaler
Sponsor (optional)
No response
Additional context
I work at SAP on a project called Kyma: https://kyma-project.io/#/. In Kyma, we recently developed such a receiver and we are already testing it out in a production setup: https://github.com/kyma-project/opentelemetry-collector-components/tree/main/receiver/singletonreceivercreator.
We've noticed discussions in the community about introducing leader election in the k8sobjects and k8scluster receivers. That's why we believe a generic mechanism could be quite beneficial. If there's interest, we are ready to contribute it to the OpenTelemetry project.