Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Can we add more compaction conditions, such as a timeout? Will compaction delete the whole file or only compact part of it?
Could you give more detail about how compaction works and when file_storage triggers it?
I'm not sure I can explain it more accurately than the documentation, but here's the simplified mental model I use to reason about this.
We basically need to look at the file as a reservoir for data. If we mostly fill up the reservoir, it will automatically expand. However, if we drain the reservoir, it will not automatically shrink. We can use compaction to shrink the reservoir, but it must remain at least as large as its contents. So typically we need to wait until the contents of the reservoir have been substantially drained. Then we can actually shrink it by a meaningful amount.
This is why we must meet two separate conditions before we run compaction:
- rebound_needed_threshold_mib
- rebound_trigger_threshold_mib

Your graph showing used bytes may only be showing the size of the reservoir. Typically, when compaction does not occur, it is because the contents are not draining as quickly as you expect.
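For reference, here is a minimal sketch of where these two thresholds live in a file_storage configuration; the paths and numeric values are illustrative assumptions, not taken from this issue:

```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/file_storage    # illustrative path
    compaction:
      on_rebound: true
      directory: /tmp                           # scratch directory used while rewriting the file
      # the reservoir's contents grew past this size at some point (so the file expanded)...
      rebound_needed_threshold_mib: 100
      # ...and have since drained below this size, so compaction can reclaim meaningful space
      rebound_trigger_threshold_mib: 10
      check_interval: 5s                        # how often the conditions are re-evaluated
```

In reservoir terms: the first threshold says the reservoir expanded enough to be worth shrinking, and the second says it has actually drained enough for compaction to reclaim a meaningful amount of space.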
Can we add more compaction conditions, such as a timeout?
The problem is that a timeout does not take into account whether or not the contents of the reservoir have been drained. There may be some value here because you could in theory reclaim some space, but it may often be a lot less than you'd expect, and you'll have spent a non-trivial amount of compute for it.
Will compaction delete the whole file or only compact part of it?
Compaction will create a complete copy of the file, edit the copy, and then finally overwrite the old file with the new.
@djaglowski Thanks for your explanation. On the question of whether the reservoir has drained: the logs give us no evidence of whether the compaction conditions have been reached, so we find this confusing and can't identify a pattern. I have learned to infer whether data has drained by calculating the difference between metrics (otelcol_receiver_accepted_log_records - otelcol_exporter_sent_log_records). What we found is that even when all data has drained, the file does not compact. In my setup, filestorage is used as the exporter's persistent queue (a config sketch follows after the list below). So my concern is, if filestorage is full:
1. Will it impact the sending queue / persistent queue? As the observed behavior shows, some of these pods can still accept/send data, but
2. Even if they can accept/send data, what about performance? They can no longer use the file (10 GB) as a buffer, right? Data can then only be buffered in memory (max 2 GB * 75%).
3. Could we add more conditions to the compaction trigger, e.g. sending_queue size? The existing conditions are too low-level for us and unclear. The sending_queue size dropping to 0 means the data has already been sent, as does the (accepted - sent) metric I mentioned.
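For context, here is a minimal sketch of the kind of setup being described, with file_storage backing the exporter's persistent sending_queue. The exporter name comes from this issue's components; the endpoint, queue size, and paths are illustrative placeholders, and the sending_queue.storage field assumes a collector version that exposes the standard exporterhelper queue settings:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

extensions:
  file_storage:
    directory: /var/lib/otelcol/file_storage    # backed by the 10 GB PV in this setup

exporters:
  sumologic:
    endpoint: https://example.collector.endpoint   # placeholder, not a real endpoint
    sending_queue:
      enabled: true
      queue_size: 5000          # illustrative value
      storage: file_storage     # queue items are persisted via the file_storage extension

service:
  extensions: [file_storage]
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [sumologic]
```

With this wiring, once the file (and the PV behind it) is full, newly queued items presumably can no longer be persisted to disk, which seems to be the scenario behind points 1 and 2 above.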
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.
I have not had time to look into this further. If anyone is able to provide a unit test that demonstrates the failure, this would make solving it much easier.
This issue has been closed as inactive because it has been stale for 120 days with no activity.
Component(s)
exporter/sumologic, extension/storage/filestorage
What happened?
Description
We noticed more and more PVs of the otel collector (exporter/sumologic with file_storage) getting full (10 GB) and never compacting. Over time, fewer and fewer pods can provide service because the sending_queue is full. Even recreating the pod does not trigger compaction.
Steps to Reproduce
Presumably a large volume of logs/entities is flushed to the same collectors, whose exporter uses file_storage, and this lasts for about 2 hours.
Architecture: a daemonset of otel collectors responsible for collecting docker logs -> a statefulset of otel collectors to buffer before sending to -> the storage backend.
Expected Result
Workflow: Daemonset pod exporter -> (k8s) service -> receiver of Statefulset pod
Actual Result
Only some of the pods auto-compact the file_storage, as the chart shows. Those with a full PV cannot compact and appear to stop working, even after the pod is recreated.
Collector version
0.74.0
Environment information
Environment
Kubernetes 1.23.9
OpenTelemetry Collector configuration
statefulset otel configurations
Additional context
No response