For larger databases, BatchLogEntries responses can become very large. This consumes significant memory and network bandwidth with no way to apply back pressure (as you would get with discrete, smaller messages). We should consider moving the client to either a stream-based API or a paginated BatchLogEntries.
For now, the temporary fix is to remove the maximum decoding message size limit on the client side.
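As a rough sketch of the temporary fix, assuming the client is gRPC-Go (where the receive limit is the `MaxCallRecvMsgSize` call option, 4 MiB by default), the limit can be lifted at dial time. The address and credentials here are illustrative placeholders:

```go
package main

import (
	"log"
	"math"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Hypothetical endpoint; substitute the real server address.
	conn, err := grpc.Dial(
		"localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		// Lift the default 4 MiB receive limit so a large BatchLogEntries
		// response is not rejected while decoding. This trades safety for
		// compatibility until streaming or pagination is in place.
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(math.MaxInt32)),
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
}
```

Note that this only removes the client-side guard; the underlying memory and bandwidth cost of a single huge message remains, which is why streaming or pagination is the proper fix.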