swallez opened this issue 1 year ago
For the 429 handling, we can argue that this belongs to the transport layer (the low-level REST client), which already retries on all cluster nodes in case of failure and should also handle 429 responses.
I would just like to point out this issue, where adding such behavior to the low-level REST client was discussed: https://github.com/elastic/elasticsearch/issues/21141#issuecomment-256941807
I also think this is more of a feature for a high-level client than for a low-level one.
That said, please reach some consensus on where this belongs, because right now every application developer has to roll their own solution.
@swallez Do you by chance have any update on this ticket? It's a year old, retry policies would be a good addition, and the work seems to have stalled.
Thanks for looking!
Hi @swallez, I hope this message finds you well. Did you get a chance to look at this ticket? I am working on an upgrade from 7.17.x to the new elasticsearch-java 8.x, and a retry policy is part of our application flow.
Best regards
Hi @swallez,
I fully agree that the `BulkProcessor` in HLRC was retrying even in cases where it should not have. On the other hand, I think that retrying only on 429s may not be enough.
What would happen when there is a temporary network issue? I guess the low-level client would just close the connection. Is there any way to configure the low-level client to support custom retry policies?
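Until something is built into the client, a common workaround is an application-level retry loop around the bulk call. Below is a minimal sketch against the 8.x Java API client; `bulkWithRetry` and its parameters are hypothetical names, and it assumes that a whole-request 429 surfaces as an `ElasticsearchException` with status 429 while transient network failures surface as an `IOException`:

```java
import java.io.IOException;

import co.elastic.clients.elasticsearch.ElasticsearchClient;
import co.elastic.clients.elasticsearch._types.ElasticsearchException;
import co.elastic.clients.elasticsearch.core.BulkRequest;
import co.elastic.clients.elasticsearch.core.BulkResponse;

public final class RetryingBulk {

    // Retries a bulk request on 429 responses and transient I/O errors,
    // with exponential backoff between attempts.
    public static BulkResponse bulkWithRetry(ElasticsearchClient client,
                                             BulkRequest request,
                                             int maxRetries,
                                             long initialDelayMillis) throws IOException {
        long delay = initialDelayMillis;
        for (int attempt = 0; ; attempt++) {
            try {
                return client.bulk(request);
            } catch (ElasticsearchException e) {
                // Retry only when the whole request was rejected with 429.
                if (e.status() != 429 || attempt >= maxRetries) {
                    throw e;
                }
            } catch (IOException e) {
                // Temporary network issue: the low-level client has already
                // tried the other known nodes, so back off and try again.
                if (attempt >= maxRetries) {
                    throw e;
                }
            }
            try {
                Thread.sleep(delay);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                throw new IOException("Interrupted while waiting to retry", ie);
            }
            delay *= 2; // exponential backoff
        }
    }
}
```

Note that this retries the whole request, not individual items, so it only covers the request-level case discussed at the top of this issue.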
cc @l-trotta
The `BulkProcessor` in the High Level Rest Client (HLRC) has two kinds of retries: resending a bulk request that was rejected with a 429, and resending the individual items that failed. The new `BulkIngester` added in #474 doesn't retry for now.

For individual item retries, the approach used in the `BulkProcessor` of retrying all failed items has some shortcomings: a number of errors will result in the same error when retried, e.g. a version conflict, a partial update failing because of a script error or a bad document structure, or the deletion of a non-existing document. The items worth retrying are probably those with a 429 status, which may happen when the coordinating node accepted the request but the target node for the item's operation was overloaded (see the sketch below).
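For illustration, here is one way to select only the 429-rejected items from a `BulkResponse` with the current 8.x API (the `RetryableItems` class and method names are just for the example):

```java
import java.util.List;
import java.util.stream.Collectors;

import co.elastic.clients.elasticsearch.core.BulkResponse;
import co.elastic.clients.elasticsearch.core.bulk.BulkResponseItem;

public final class RetryableItems {

    // Keep only the items rejected with 429; retrying other failures
    // (version conflict, script error, ...) would fail the same way again.
    public static List<BulkResponseItem> retryable(BulkResponse response) {
        return response.items().stream()
            .filter(item -> item.error() != null && item.status() == 429)
            .collect(Collectors.toList());
    }
}
```

Since response items come back in the same order as the request's operations, a caller can map each failed item back to its original operation by position before resubmitting.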
A way to handle this in the new `BulkIngester` would be to define a retry policy by means of a delay behavior (linear, exponential, etc.) like in HLRC, together with a predicate that selects the failed items that need to be retried.
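To make that proposal concrete, one possible shape for such a policy is sketched below. None of these types exist in the client today; `BulkRetryPolicy` and its methods are purely hypothetical:

```java
import java.time.Duration;
import java.util.function.Predicate;

import co.elastic.clients.elasticsearch.core.bulk.BulkResponseItem;

// Hypothetical policy the BulkIngester could accept: a backoff schedule
// plus a predicate choosing which failed items are worth resubmitting.
public interface BulkRetryPolicy {

    // Delay before the given retry attempt (fixed, linear, exponential, ...),
    // or null to give up after this many attempts.
    Duration delayBeforeAttempt(int attempt);

    // Decides whether a failed item should be retried; a sensible default
    // would be: item -> item.status() == 429.
    Predicate<BulkResponseItem> shouldRetry();
}
```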