dmitryax opened 1 year ago
Some feedback about the interface:
Thanks for the feedback!
Can you make the old NewMetricsExporter call into NewMetricsRequestExporter to remove duplicate code and to actually start testing the new path more intensively? It would also remove lots of duplicate tests.
That's what I wanted to do after https://github.com/open-telemetry/opentelemetry-collector/pull/8248 is merged 👍
Thanks for the work on this feature; I'm looking forward to seeing the results. I wanted to ask whether including the batching inside the exporter helper would also solve the issue of having a retry and sending queue per incoming context.
Currently, back-pressure is applied uniformly upstream of the pipeline. This means that if the endpoint relevant for one context is down, it can affect telemetry delivery for other pipelines. Will the new design also implement queue senders / retry senders per metadata labelset, similar to the way in which the current batch processor is sharded? Does each shard of the batch sender in the new design have its own queue and retry loop?
Edit: added a diagram of what I mean. Each dotted line represents telemetry with a unique set of metadata attributes. The question is whether my understanding is correct that there will be a queue sender and a retry sender per batching shard.
@verejoel, the batch sharding will be added for parity with the batch processor. But there are no plans to have separate sender pipelines. The sharding will ensure that one batch won't have data with different metadata values but the batches will be processed by one pipeline.
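To make the metadata sharding concrete, here is a minimal Go sketch of how a shard key could be derived from the client metadata carried in the request context. The shardKey helper and the key format are illustrative assumptions, not the actual batch sender implementation:

```go
package main

import (
	"context"
	"fmt"
	"strings"

	"go.opentelemetry.io/collector/client"
)

// shardKey builds a hypothetical batch-shard key from the configured metadata
// keys (e.g. a tenant header) found in the client info of the request context,
// so that one batch never mixes data that arrived with different metadata values.
func shardKey(ctx context.Context, metadataKeys []string) string {
	info := client.FromContext(ctx)
	parts := make([]string, 0, len(metadataKeys))
	for _, k := range metadataKeys {
		parts = append(parts, k+"="+strings.Join(info.Metadata.Get(k), ";"))
	}
	return strings.Join(parts, "|")
}

func main() {
	// Without client info in the context, every request falls into the same shard.
	fmt.Println(shardKey(context.Background(), []string{"tenant"}))
}
```

Requests with equal keys would be merged into the same batch, while the sender pipeline behind the shards stays shared, as described above.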
To properly support multi-tenant pipelines, having a sender queue per batching shard will be critical. Otherwise, batches from tenants experiencing back-pressure can still block the pipeline for all tenants attempting to ship data.
Can we work in sharded queues as part of this feature? Or should I write it down as a separate feature request?
@dmitryax Thanks for the batch sender! I've been experimenting with it lately.
In my experiments, the batch sender is producing inconsistent batch sizes, which can be smaller than desired due to goroutine scheduling, even after https://github.com/open-telemetry/opentelemetry-collector/pull/9761. The scenario I usually run into is this: given queue sender concurrency = batch sender concurrency limit = N, and all of them blocked on send, when the send eventually returns, activeRequests is first set to N-1; then a new consumer goroutine comes in, increments the active requests, realizes N-1+1 == concurrencyLimit, and sends off the request right away, causing an undesirably small request to be exported without batching.
I tried to fix it by resetting bs.activeRequests to 0 next to close(b.done). While that fixes sendMergeBatch, it does not work for sendMergeSplitBatch, since it may be exporting something outside of activeBatch. I assume there might be a way around this for merge-split batching, but I'm not sure it is worth the trouble and complexity to work around unfavorable goroutine scheduling.
On the performance side of things, I haven't benchmarked it, but I'm not sure how performant it is when queue consumer concurrency is high (e.g. 1000+), given the number of running goroutines. I understand that spawning goroutines and blocking them may be the only way to avoid deleting the entry in the persistent queue before the request is exported.
I wish I could give some useful suggestions here, but I don't have a good idea within the current paradigm. Have you considered the limitations I mentioned, and are there bigger changes planned ahead?
edit: I'm starting to think, would it be fine if the batch sender waits up to FlushTimeout even when the concurrency limit is reached but not all goroutines are in the same batch?
@dmitryax, following up on my earlier edit ("would it be fine if the batch sender waits up to FlushTimeout even when the concurrency limit is reached but not all goroutines are in the same batch?"): I've drafted a PR to do so, and feedback is welcome: https://github.com/open-telemetry/opentelemetry-collector/pull/9891. We can discuss the desired behavior in the PR.
The issue is captured in https://github.com/open-telemetry/opentelemetry-collector/issues/9952
Regarding "Optional counting of queue and batch sizes in bytes of serialized data": I see that batching based on size rather than item count is planned but apparently not implemented yet. Is that still on the roadmap?
Counting batch sizes in bytes would be super useful for us, since some of our backends limit us on payload size in bytes. @dmitryax, let me know if there's anything I can do to help push that forward; I'd be happy to help out.
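As a rough illustration of the byte-based limit being asked for here, the sketch below measures the serialized protobuf size of a pending metrics batch using pdata's ProtoMarshaler; the threshold and the shouldFlush helper are hypothetical, not part of the exporter helper today:

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/collector/pdata/pmetric"
)

// maxBatchBytes is an illustrative payload limit; a real backend would dictate its own.
const maxBatchBytes = 4 * 1024 * 1024

// shouldFlush is a hypothetical trigger: flush the pending batch once its
// serialized (protobuf) size crosses the backend's payload limit.
func shouldFlush(batch pmetric.Metrics) bool {
	var sizer pmetric.ProtoMarshaler
	return sizer.MetricsSize(batch) >= maxBatchBytes
}

func main() {
	md := pmetric.NewMetrics()
	m := md.ResourceMetrics().AppendEmpty().ScopeMetrics().AppendEmpty().Metrics().AppendEmpty()
	m.SetName("example.metric")
	m.SetEmptyGauge().DataPoints().AppendEmpty().SetIntValue(1)
	fmt.Println("flush now?", shouldFlush(md)) // false for such a tiny batch
}
```

If the wire format is not OTLP protobuf (e.g. a bulk JSON API), the size would have to be computed against the exporter's own request model instead, which is exactly what moving batching into the exporter helper enables.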
As discussed in today's Collector SIG meeting, we (Elastic) have the same requirement as @verejoel described in https://github.com/open-telemetry/opentelemetry-collector/issues/8122#issuecomment-2020756808.
We intend to support a multi-tenant collector that receives data from different tenants, and sends the data to a tenant-specific Elasticsearch cluster. For performance reasons we should batch the data before exporting to Elasticsearch, which must necessarily be done per tenant.
Although partitioning may be done by the batch sender (https://github.com/open-telemetry/opentelemetry-collector/issues/10825), each backend instance (in our case, an Elasticsearch cluster) may have different performance (different load, different scaling, ...) or availability; with a single queue, this may lead to head-of-line blocking.
This is a tracking issue for introducing the new exporter helper and migrating the existing exporters to use it.
The primary reason for introducing the new exporter helper is to move the batching to the exporter side and deprecate the batch processor as part of making the delivery pipeline reliable, as reported in https://github.com/open-telemetry/opentelemetry-collector/issues/7460. More details about moving batching to the exporter helper can be found in https://github.com/open-telemetry/opentelemetry-collector/issues/4646.
Shifting batching to the exporter side grants us the opportunity to leverage the exporter's data model instead of relying on OTLP. As a result, we can achieve the following benefits:
Adapting to the new exporter helper requires exporter developers to implement at least two functions:
Design document: https://docs.google.com/document/d/1uxnn5rMHhCBLP1s8K0Pg_1mAs4gCeny8OWaYvWcuibs
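For readers new to the thread, here is a minimal, non-authoritative Go sketch of the Request-based shape the design document describes; the type names and method bodies are illustrative assumptions, not the final API:

```go
package myexporter

import (
	"context"

	"go.opentelemetry.io/collector/exporter/exporterhelper"
	"go.opentelemetry.io/collector/pdata/pmetric"
)

// metricsRequest is a hypothetical exporter-specific request type. It holds the
// data already converted to the backend's model so the helper can queue, batch,
// and retry it without going back to OTLP.
type metricsRequest struct {
	md pmetric.Metrics // or the exporter's own wire format
}

// ItemsCount reports how many items the request carries, which the helper can
// use for queue and batch accounting.
func (r *metricsRequest) ItemsCount() int {
	return r.md.DataPointCount()
}

// Export sends the converted payload to the backend.
func (r *metricsRequest) Export(ctx context.Context) error {
	// ... write r.md to the backend here ...
	return nil
}

// requestFromMetrics converts incoming pdata into the exporter's request model;
// a function like this would be passed to the request-based constructor
// (e.g. NewMetricsRequestExporter mentioned earlier in this thread).
func requestFromMetrics(_ context.Context, md pmetric.Metrics) (exporterhelper.Request, error) {
	return &metricsRequest{md: md}, nil
}
```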
Essential sub-issues to mark this as complete:
Additional sub-issues to get feature parity with the batch processor: