Open · rodrigc opened this issue 1 year ago
Hi! We just realized that we haven't looked into this issue in a while. We're sorry!
We're labeling this issue as Stale to make it hit our filters and make sure we get back to it as soon as possible. In the meantime, it'd be extremely helpful if you could take a look at it as well and confirm its relevance. A simple comment with a nice emoji will be enough :+1.
Thank you for your contribution!
Describe the enhancement:
In this discussion thread: https://discuss.elastic.co/t/fix-to-libbeats-to-split-bulk-requests-which-are-too-large-not-working-in-elastic-agent-8-9-0/342136
I raised the point that the GET _cluster/settings API returns http.max_content_length.

My request is that beats and elastic-agent query http.max_content_length from Elasticsearch and use this value when setting the size of the payload in POST _bulk calls to the server. A sketch of what that could look like is included below.

A lot of work has gone into PR 34911: Split large batches on error instead of dropping them, but I think querying http.max_content_length from the server could enhance this logic and make it more robust for very large payloads.

Enhancement Request: 19624
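For illustration, here is a minimal, hedged sketch (in Go, since beats is written in Go) of how a client might fetch http.max_content_length and turn it into a byte cap for _bulk batches. The helper names (fetchMaxContentLength, parseByteSize) and the plain net/http client are assumptions for this example, not existing libbeat APIs; a real implementation would presumably reuse libbeat's Elasticsearch client and its existing byte-size handling.

```go
// Hypothetical sketch: query http.max_content_length from Elasticsearch and
// use it to cap the byte size of a _bulk request. Helper names are illustrative.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strconv"
	"strings"
)

// fetchMaxContentLength asks the cluster for its settings (including defaults,
// since http.max_content_length is usually not set explicitly) and returns the
// configured limit in bytes.
func fetchMaxContentLength(esURL string) (int64, error) {
	resp, err := http.Get(esURL + "/_cluster/settings?include_defaults=true&flat_settings=true")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return 0, fmt.Errorf("unexpected status: %s", resp.Status)
	}

	var body struct {
		Persistent map[string]interface{} `json:"persistent"`
		Transient  map[string]interface{} `json:"transient"`
		Defaults   map[string]interface{} `json:"defaults"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return 0, err
	}

	// Explicitly set values take precedence over the reported default.
	for _, settings := range []map[string]interface{}{body.Transient, body.Persistent, body.Defaults} {
		if v, ok := settings["http.max_content_length"].(string); ok {
			return parseByteSize(v)
		}
	}
	return 0, fmt.Errorf("http.max_content_length not found in cluster settings")
}

// parseByteSize converts Elasticsearch byte-size strings such as "100mb" into bytes.
func parseByteSize(s string) (int64, error) {
	s = strings.ToLower(strings.TrimSpace(s))
	units := []struct {
		suffix string
		mult   int64
	}{{"gb", 1 << 30}, {"mb", 1 << 20}, {"kb", 1 << 10}, {"b", 1}}
	for _, u := range units {
		if strings.HasSuffix(s, u.suffix) {
			n, err := strconv.ParseInt(strings.TrimSuffix(s, u.suffix), 10, 64)
			if err != nil {
				return 0, err
			}
			return n * u.mult, nil
		}
	}
	return strconv.ParseInt(s, 10, 64)
}

func main() {
	limit, err := fetchMaxContentLength("http://localhost:9200")
	if err != nil {
		fmt.Println("could not determine http.max_content_length:", err)
		return
	}
	// A bulk publisher could then split batches so that no single _bulk request
	// body exceeds this limit (leaving some headroom for the action/metadata lines).
	fmt.Printf("cluster accepts request bodies up to %d bytes\n", limit)
}
```

Note that include_defaults=true matters here: http.max_content_length is usually left at its default (100mb), so without that flag it would not appear in the GET _cluster/settings response at all.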