VijayPatil872 opened 3 days ago
Pinging code owners:
receiver/kafka: @pavolloffay @MovieStoreGuy
You need a way to apply backpressure. The memory_limiter processor is in charge of checking memory usage and applying such backpressure. I see it is disabled in your pipeline.
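For reference, a minimal sketch of enabling the memory_limiter processor in a pipeline (the values and the otlp exporter name are illustrative, not a recommendation; memory_limiter must be the first processor in the pipeline):

```yaml
processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 1500        # hard limit: new data is refused above this
    spike_limit_mib: 300   # soft limit = limit_mib - spike_limit_mib
  batch: {}

service:
  pipelines:
    metrics:
      receivers: [kafka]
      processors: [memory_limiter, batch]  # memory_limiter goes first
      exporters: [otlp]
```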
@atoulme As suggested, we tested this scenario with the memory_limiter processor enabled in the pipelines. We observed that, with memory_limiter enabled, the exporters start dropping data when the backend is down. After some time the collector pod's memory is freed and it resumes pulling events from the queue, which does not meet our expectations for the memory limiter. Do you think another approach could be tried here?
As seen in the screenshot, the acceptance rate drops for some time, but then the collector starts pulling metrics again. We also observed that the rate at which metrics are pulled from Kafka is much higher than the ingestion rate.
Component(s)
receiver/kafka
What happened?
Description
The OpenTelemetry Collector pods pull events from the Azure EventHub queue with the Kafka receiver and export them to the backends. If these backends are unavailable, the collector keeps pulling events from Azure EventHub and holds them in its in-memory queue, which gradually fills all the memory available to the pods. The pods then run out of resources and start dropping events. For this reason, we can't make proper use of Kafka/EventHub as a data-loss-protection queue. Is there a component that stops pulling further events from the Azure EventHub queue when the in-memory queue fills up or collector resources become insufficient, or any other solution to this issue?
Steps to Reproduce
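For context, a hypothetical sketch of the Kafka receiver consuming from Azure EventHub over its Kafka-compatible endpoint (the namespace, EventHub name, consumer group, and connection string below are placeholders, not our actual configuration):

```yaml
receivers:
  kafka:
    brokers: ["<namespace>.servicebus.windows.net:9093"]
    topic: <eventhub-name>
    protocol_version: 1.0.0
    group_id: otel-collector
    auth:
      sasl:
        mechanism: PLAIN
        username: "$ConnectionString"
        password: "Endpoint=sb://<namespace>.servicebus.windows.net/;..."
      tls:
        insecure: false
```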
Expected Result
The OpenTelemetry Collector should stop pulling new events from the EventHub queue when it runs out of resources.
Actual Result
The otel pods keep pulling events from the Azure EventHub queue even though the OpenTelemetry Collector has run out of resources.
Collector version
0.104
Environment information
Environment
OS: (e.g., "Ubuntu 20.04")
Compiler (if manually compiled): (e.g., "go 14.2")
OpenTelemetry Collector configuration
Log output
No response
Additional context
No response