Closed: N0NameN0 closed this issue 4 years ago
Since a few days ago it seems I have too much data to parse (though I have only had Logstash data since 7/7/2020). When I try to display the last two months I get an error (example below), yet it uses only a small part of my RAM. Do I have to increase a value somewhere? ;)
That's not too much data; however, with huge datasets Kibana can be picky. Have you tried upgrading to the latest 20.06.1 (/opt/tpot/update.sh)?
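For reference, a minimal sketch of the update procedure, assuming a default T-Pot installation with the updater at the standard path:

```bash
# Assumption: default T-Pot install with the updater at /opt/tpot/update.sh.
# The script needs root privileges and upgrades T-Pot in place.
sudo /opt/tpot/update.sh
```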
Same problem after the update. Any hint on where to look?
Here is what I can read in the Kibana container output:
```
doc['task.retryAt'].value.toInstant().toEpochMilli();\\\\n}\\\\nif (doc['task.runAt'].size()!=0) {\\\\n return doc['task.runAt'].value.toInstant().toEpochMilli();\\\\n}\\\\n \\\"}}}],\\\"seq_no_primary_term\\\":true,\\\"script\\\":{\\\"source\\\":\\\"ctx._source.task.ownerId=params.ownerId; ctx._source.task.status=params.status; ctx._source.task.retryAt=params.retryAt;\\\",\\\"lang\\\":\\\"painless\\\",\\\"params\\\":{\\\"ownerId\\\":\\\"kibana:1a5fee71-bce4-4fac-a9a8-4aae02fe3dd3\\\",\\\"status\\\":\\\"claiming\\\",\\\"retryAt\\\":\\\"2020-09-15T11:53:02.830Z\\\"}}}\",\"statusCode\":429,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"circuit_breaking_exception\\\",\\\"reason\\\":\\\"[parent] Data too large, data for [<http_request>] would be [2063628510/1.9gb], which is larger than the limit of [1972122419/1.8gb], real usage: [2063621320/1.9gb], new bytes reserved: [7190/7kb], usages [request=480/480b, fielddata=5243870/5mb, in_flight_requests=7190/7kb, accounting=121866396/116.2mb]\\\"
```
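(For context: the 429 response with a circuit_breaking_exception means the Elasticsearch parent circuit breaker rejected the request because the JVM heap was nearly exhausted: 1.9gb in use against a 1.8gb limit, i.e. roughly a 2 GB heap. A quick way to confirm the heap size and breaker state, assuming T-Pot's default mapping of Elasticsearch to 127.0.0.1:64298:)

```bash
# Show current vs. maximum JVM heap per node
# (assumption: Elasticsearch is reachable on T-Pot's default port 64298).
curl -s 'http://127.0.0.1:64298/_cat/nodes?v&h=name,heap.current,heap.max'

# Show the circuit-breaker limits and how often each breaker has tripped
curl -s 'http://127.0.0.1:64298/_nodes/stats/breaker?pretty'
```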
ES seems to need more RAM. The JVM heap is capped by the Xms/Xmx settings regardless of how much system RAM is free, which is why only a small part of your RAM is used. You can change the Xms/Xmx lines to 4096 in /opt/tpot/etc/tpot.yml.
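A minimal sketch of that change, assuming the T-Pot 20.06 default where the heap is set via an ES_JAVA_OPTS line in tpot.yml (the 2048m values below are an assumption; check the file first):

```bash
# Assumption: /opt/tpot/etc/tpot.yml contains a line like
#   - ES_JAVA_OPTS=-Xms2048m -Xmx2048m
# Raise both values so initial and maximum heap stay equal:
sudo sed -i 's/-Xms2048m -Xmx2048m/-Xms4096m -Xmx4096m/' /opt/tpot/etc/tpot.yml

# Restart T-Pot so the Elasticsearch container picks up the new heap size
# (assumption: T-Pot runs as the "tpot" systemd service).
sudo systemctl restart tpot
```

Keeping Xms equal to Xmx avoids heap resizing at runtime; make sure the host has enough free RAM for the larger heap plus the other containers.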
All seems ok now ;) thx ;)
You're welcome!