Open weiwu-sre opened 3 months ago
Hi @weiwu-sre
The EntityTooLarge error seems to be a limitation of your S3 storage backend; see https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList
However, you can build smaller blocks by setting these options in the limits_config (they can also be set per tenant):
# Experimental. The maximum bloom block size. A value of 0 sets an unlimited
# size. Default is 200MB. The actual block size might exceed this limit since
# blooms will be added to blocks until the block exceeds the maximum block size.
# CLI flag: -bloom-compactor.max-block-size
[bloom_compactor_max_block_size: <int> | default = 200MB]
# Experimental. The maximum bloom size per log stream. A log stream whose
# generated bloom filter exceeds this size will be discarded. A value of 0 sets
# an unlimited size. Default is 128MB.
# CLI flag: -bloom-compactor.max-bloom-size
[bloom_compactor_max_bloom_size: <int> | default = 128MB]
128MB for blooms and 200MB for blocks are the default values. These were introduced at some point after 3.0.0, though, so if you upgrade to the latest version, you should not see such big blocks anymore.
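Per tenant, these limits can go in Loki's runtime overrides file. A minimal sketch, assuming a tenant ID of tenant-a and illustrative size values:

```yaml
# runtime_config overrides file; the tenant ID and sizes are placeholders
overrides:
  tenant-a:
    bloom_compactor_max_block_size: 100MB
    bloom_compactor_max_bloom_size: 64MB
```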
BTW, I am running grafana/loki:3.1.0, and I can see those default values in the help output:
-bloom-compactor.max-block-size value
Experimental. The maximum bloom block size. A value of 0 sets an unlimited size. Default is 200MB. The actual block size might exceed this limit since blooms will be added to blocks until the block exceeds the maximum block size. (default 200MB)
-bloom-compactor.max-bloom-size value
Experimental. The maximum bloom size per log stream. A log stream whose generated bloom filter exceeds this size will be discarded. A value of 0 sets an unlimited size. Default is 128MB. (default 128MB)
Should I set those values to enforce the limit?
Same issue
To provide more context: we observe that many of the bloom blocks of our big tenants are larger than the max size (200MB by default). Most of the blooms are still built and uploaded since they are smaller than 5GB, but a few are larger, and that stops the whole compaction process.
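To illustrate why a 600MB bloom uploads fine while a few oversized blocks still break compaction: S3 rejects any single PUT above roughly 5GB with EntityTooLarge, and Loki's 200MB block cap is a soft limit (a block is closed only after it exceeds the cap). A minimal sketch of the arithmetic; the exact 5GiB constant is an assumption based on the AWS error list linked above:

```python
# S3 rejects a single PUT above ~5GB with EntityTooLarge (assumed 5GiB here).
S3_SINGLE_PUT_LIMIT = 5 * 1024**3  # 5,368,709,120 bytes

def upload_would_fail(block_size_bytes: int) -> bool:
    """Return True if a single-PUT upload of this size would be rejected."""
    return block_size_bytes > S3_SINGLE_PUT_LIMIT

# A 600MB bloom block is well under the ceiling and uploads fine.
print(upload_would_fail(600 * 1000**2))  # False
# A block that grew past ~5GiB fails and stalls the compactor.
print(upload_would_fail(6 * 1024**3))    # True
```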
Describe the bug
Loki bloom compactor failed due to EntityTooLarge.
The error message looks like this:
I am using S3 as the storage backend; with the following compactor configuration, the compactor fails and does not recover.
After inspecting the files on the compactor, I found the bloom is about 600+MB.
To Reproduce
Expected behavior
The compactor can upload large files to the S3 backend.
Environment:
Loki 3.1, EKS v1.27
Screenshots, Promtail config, or terminal output