Closed by chicco785 4 months ago.
@chicco785 there is no specific reason. Feel free to prepare a PR to reduce the value; I'd say we can drop down to 10 ms. Thank you.
BTW, there is always BatchSend to give you more control, see: https://github.com/rabbitmq/rabbitmq-stream-go-client?tab=readme-ov-file#send-vs-batchsend
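For illustration, here is a minimal sketch of the two paths, assuming the Environment/Producer API described in the README (the stream name and batch size are arbitrary, and error handling is trimmed):

```go
package main

import (
	"fmt"

	"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/amqp"
	"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/message"
	"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/stream"
)

func main() {
	// Connect with default options (localhost:5552, guest/guest).
	env, err := stream.NewEnvironment(stream.NewEnvironmentOptions())
	if err != nil {
		panic(err)
	}
	producer, err := env.NewProducer("my-stream", stream.NewProducerOptions())
	if err != nil {
		panic(err)
	}
	defer producer.Close()

	// Send: the client accumulates messages and flushes them based on the
	// batch size / batch publishing delay (the delay discussed in this issue).
	if err := producer.Send(amqp.NewMessage([]byte("via Send"))); err != nil {
		fmt.Println(err)
	}

	// BatchSend: the caller builds the batch and decides when it goes out,
	// so the publishing delay does not come into play.
	var batch []message.StreamMessage
	for i := 0; i < 50; i++ {
		batch = append(batch, amqp.NewMessage([]byte(fmt.Sprintf("msg %d", i))))
	}
	if err := producer.BatchSend(batch); err != nil {
		fmt.Println(err)
	}
}
```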
@hiimjako I don't recall if there is a reason why we picked Send over BatchSend.
If you work with HA, @hiimjako, I know why: the HA part does not support (yet) BatchSend, only Send. :)
https://github.com/rabbitmq/rabbitmq-stream-go-client/blob/main/pkg/ha/ha_publisher.go#L112
Exactly for this. I think it would be great to have both:

- a reduced threshold, since the Send function already has all the logic needed for batch handling;
- exposing BatchSend, to give more control to those who want to implement different batch handling.

For the first point, why not just add a check that minBatchPublishingDelay is not less than 1 ms? It will obviously stress the client with lower values, but everyone will have the freedom to set it to the value that works best for them. What do you think?
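To make the first point concrete, here is a hypothetical sketch of such a validation. This is not the library's actual code: the constant values, the function name, and the error-instead-of-fallback behavior are all assumptions for illustration (the real floor lives in pkg/stream/constants.go, linked below in the issue description).

```go
package stream

import "fmt"

// Hypothetical constants for illustration only.
const (
	proposedMinBatchPublishingDelay = 1   // ms, the floor suggested above
	defaultBatchPublishingDelay     = 100 // ms, assumed default
)

// validateBatchPublishingDelay sketches the proposed check: keep the default
// when the option is unset, and reject anything below 1 ms instead of the
// current 50 ms floor.
func validateBatchPublishingDelay(delayMs int) (int, error) {
	if delayMs == 0 {
		return defaultBatchPublishingDelay, nil
	}
	if delayMs < proposedMinBatchPublishingDelay {
		return 0, fmt.Errorf("batch publishing delay must be at least %d ms",
			proposedMinBatchPublishingDelay)
	}
	return delayMs, nil
}
```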
> For the first point, why not just add a check that minBatchPublishingDelay is not less than 1 ms?
Ok.
I don't have the bandwidth right now, but feel free to propose a PR.
I will do it in the next few days :+1:
fixed by #333
Is your feature request related to a problem? Please describe.
https://github.com/rabbitmq/rabbitmq-stream-go-client/blob/baed3d514424e0924142231059167ad1c1598f5b/pkg/stream/constants.go#L107
In our use case the application works at a 20 ms frequency, so ideally the batch delay should be around that value. We can play with the batch size, but when some sensors are unreliable this issue pops up again and we need to reconfigure the batch size. Setting the delay to 20 ms (or 16 ms in the US market) would greatly decrease the latency for real-time processing.
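For reference, the delay in question is the one set through the producer options. A minimal sketch of how we would like to configure it, assuming the ProducerOptions API from the README where the value is expressed in milliseconds (the stream name is arbitrary):

```go
package main

import (
	"github.com/rabbitmq/rabbitmq-stream-go-client/pkg/stream"
)

func main() {
	env, err := stream.NewEnvironment(stream.NewEnvironmentOptions())
	if err != nil {
		panic(err)
	}

	// We would like a publishing delay of ~20 ms to match the sensor rate;
	// today values below the 50 ms floor referenced above are not honored,
	// which is what this request asks to relax.
	producer, err := env.NewProducer("sensor-stream",
		stream.NewProducerOptions().
			SetBatchSize(100).
			SetBatchPublishingDelay(20))
	if err != nil {
		panic(err)
	}
	defer producer.Close()
}
```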
Describe the solution you'd like
If possible, allow setting a batch delay below 50 ms.
Describe alternatives you've considered
No response
Additional context
No response