Closed — cbeaujoin-stellar closed this issue 5 months ago
We could make it configurable more easily.
The higher the batch_size, the more likely transmission errors become, such as timeouts or processing errors on the Logstash side.
While the Beats protocol is designed to handle such problems, I'd still keep the default conservatively low.
What about:
batch_size = max(50, constants.QUEUED_EVENTS_BATCH_SIZE)
?

Hi, yes, that sounds like a good trade-off.
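A minimal sketch of what the suggested default evaluates to; the `constants` class below is a hypothetical stand-in for the project's constants module, and the value 1500 is taken from the example later in the thread:

```python
# Hypothetical stand-in for the project's constants module.
class constants:
    QUEUED_EVENTS_BATCH_SIZE = 1500

# The suggested default: a floor of 50 events per Beats batch,
# otherwise following the configured queue batch size.
batch_size = max(50, constants.QUEUED_EVENTS_BATCH_SIZE)
print(batch_size)  # 1500, since the configured size exceeds the floor of 50
```

Note that with this formula the Beats batch size only falls back to 50 when the configured queue batch size is smaller than 50; otherwise it tracks the configured size.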
@cbeaujoin-stellar I implemented the new setting QUEUED_EVENTS_BEATS_BATCH_SIZE
with a simple default. I think this is OK, as the batch size is configurable and users can set it as they like.
I think the batch_size should not be equal to 10; it should equal constants.QUEUED_EVENTS_BATCH_SIZE.
E.g., if constants.QUEUED_EVENTS_BATCH_SIZE = 1500, flushing the queue will generate 1500/10 = 150 transactions, which will take a lot of time.
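The arithmetic behind that concern can be sketched as follows; the function name and constant here are illustrative, not the project's actual code:

```python
import math

# How many Beats transmissions a queue flush needs at a given batch size.
QUEUED_EVENTS_BATCH_SIZE = 1500  # events pulled from the queue per cycle

def transmission_count(total_events: int, batch_size: int) -> int:
    """Number of Beats batches required to send total_events."""
    return math.ceil(total_events / batch_size)

print(transmission_count(QUEUED_EVENTS_BATCH_SIZE, 10))    # 150 transactions
print(transmission_count(QUEUED_EVENTS_BATCH_SIZE, 1500))  # 1 transaction
```

With a batch size of 10 each flush costs 150 round trips; matching the queue batch size collapses that to a single transmission, at the cost of a larger (and more timeout-prone) payload.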