dblock opened this issue 2 years ago
I had opened something similar a while back: https://github.com/elastic/elasticsearch/issues/58404
I will look into this.
As mentioned above, the information provided right now is not very specific; it only tells us that the data is too large. One approach could be to use a token bucket algorithm to track how often requests trip the breaker, expressed as a percentage (see the sketch after this list). The doubts I had were:
→ Does this request-trip approach imply that the number of segments is high and segment memory is large?
→ Here, we would change the breaker durability from transient to permanent if the trip percentage exceeds the limit; if the breaker trips but the percentage does not exceed the limit, do we just follow the previous approach?
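A minimal sketch of the token-bucket idea described above, assuming each breaker trip consumes one token and tokens refill at a fixed rate; the class, method, and parameter names (`TripTokenBucket`, `onTrip`, `capacity`, `refillPerSec`) and the local `Durability` enum are illustrative and not part of Elasticsearch or OpenSearch.

```java
// Token bucket sketch: every breaker trip consumes one token; tokens refill at a fixed
// rate. Running out of tokens means trips are happening faster than the tolerated rate,
// which is the signal to escalate the reported durability from TRANSIENT to PERMANENT.
public class TripTokenBucket {

    /** Illustrative durability levels, mirroring the transient/permanent idea above. */
    public enum Durability { TRANSIENT, PERMANENT }

    private final long capacity;        // burst of trips tolerated before escalating
    private final double refillPerSec;  // tokens restored per second
    private double tokens;
    private long lastRefillNanos;

    public TripTokenBucket(long capacity, double refillPerSec) {
        this.capacity = capacity;
        this.refillPerSec = refillPerSec;
        this.tokens = capacity;
        this.lastRefillNanos = System.nanoTime();
    }

    private void refill() {
        long now = System.nanoTime();
        double elapsedSec = (now - lastRefillNanos) / 1_000_000_000.0;
        tokens = Math.min(capacity, tokens + elapsedSec * refillPerSec);
        lastRefillNanos = now;
    }

    /** Call on every breaker trip; returns the durability to attach to the failure. */
    public synchronized Durability onTrip() {
        refill();
        if (tokens >= 1.0) {
            tokens -= 1.0;               // still within the tolerated trip rate
            return Durability.TRANSIENT;
        }
        return Durability.PERMANENT;     // sustained tripping beyond the allowed rate
    }
}
```

With this shape, occasional trips would still be reported as transient, while sustained tripping would surface as a permanent condition.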
There are two things here, so feel free to split this into separate issues:
Caused by: CircuitBreakingException[[parent] Data too large, data for [cluster:monitor/nodes/info[n]] would be [2061357276/1.9gb], which is larger than the limit of [2023548518/1.8gb], real usage: [2061355280/1.9gb], new bytes reserved: [1996/1.9kb], usages [request=0/0b, fielddata=0/0b, in_flight_requests=1996/1.9kb, accounting=194546028/185.5mb]]
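For reference, a minimal sketch of the arithmetic behind the message above, using the numbers it reports: the parent breaker trips because the real heap usage plus the newly reserved bytes would exceed the limit, while the child usages (including the accounting breaker, which tracks segment-related memory) are comparatively small.

```java
// Sketch of the check reflected in the "Data too large" message above, using the
// values taken directly from that exception.
public class ParentBreakerCheck {
    public static void main(String[] args) {
        long realUsage = 2_061_355_280L;   // "real usage" from the message (~1.9gb)
        long newBytesReserved = 1_996L;    // "new bytes reserved" (~1.9kb)
        long limit = 2_023_548_518L;       // parent breaker limit (~1.8gb)

        long wouldBe = realUsage + newBytesReserved;   // 2061357276 -> "would be [1.9gb]"
        boolean trips = wouldBe > limit;               // true: the request is rejected

        System.out.printf("would be %d, limit %d, trips=%b%n", wouldBe, limit, trips);
        // The child usages (in_flight_requests=1.9kb, accounting=185.5mb) are small;
        // the trip is driven by overall real heap usage, not by this request itself.
    }
}
```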
Thank you Bukhtawar for pointing this out. As mentioned, this can be a meta issue that is broken into two parts.
Is your feature request related to a problem? Please describe.
A cluster was hitting a circuit breaker.
After taking heap dumps, the suspect was segment metadata: the number of segments was huge, and maintaining metadata for all of those segments was consuming too much memory.
Can the error message just tell us that?
Describe the solution you'd like
In this case, the error message should say that the number of segments is too large for their metadata to fit in memory, and it should recommend troubleshooting steps.
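A minimal sketch of what such an enriched message could look like, assuming the node already knows its segment count and segment memory; the helper name `enrichBreakerMessage`, its parameters, the threshold, and the numbers in `main` are hypothetical and not an actual Elasticsearch or OpenSearch API.

```java
// Illustrative only: append segment information to a "Data too large" breaker message
// so the operator can see the likely cause and a next step.
public final class BreakerMessages {

    private BreakerMessages() {}

    static String enrichBreakerMessage(String baseMessage,
                                       long segmentCount,
                                       long segmentMemoryBytes,
                                       long segmentMemoryThresholdBytes) {
        StringBuilder sb = new StringBuilder(baseMessage);
        // Only add the hint when segment metadata looks like a dominant heap consumer.
        if (segmentMemoryBytes > segmentMemoryThresholdBytes) {
            sb.append("; node holds ").append(segmentCount)
              .append(" segments using ").append(segmentMemoryBytes)
              .append(" bytes of metadata, which may be the dominant heap consumer.")
              .append(" Consider force-merging indices, reducing shard count,")
              .append(" or increasing the heap.");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String base = "[parent] Data too large, data for [cluster:monitor/nodes/info[n]]"
                + " would be [2061357276/1.9gb], which is larger than the limit of"
                + " [2023548518/1.8gb]";
        // Hypothetical segment figures, purely for illustration.
        System.out.println(enrichBreakerMessage(base, 250_000L, 900_000_000L, 500_000_000L));
    }
}
```

The appended hints (force merge, fewer shards, more heap) are examples of the kind of troubleshooting steps the message could point to.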