pires / kubernetes-elasticsearch-cluster

Elasticsearch cluster on top of Kubernetes made easy.

es-data nodes exceeding Xmx memory #233

Open · fortuneFelix opened this issue 5 years ago

fortuneFelix commented 5 years ago

Hello! We are running Elasticsearch 5.6.0 in our Kubernetes 1.10.3 cluster with Docker 1.13.1. The es-data nodes consistently exceed the configured Xmx value.

Settings:

- ES_JAVA_OPTS=-Xms2048m -Xmx2048m
- Kubernetes memory limit: 4096M

Full JVM arguments:

```
-XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly,
-XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8,
-Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true,
-Dio.netty.noKeySetOptimization=true, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true,
-Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Xms2048m, -Xmx2048m, -Des.path.home=/elasticsearch
```
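For context, the relevant slice of our es-data manifest looks roughly like this (an illustrative excerpt only, not the exact file; field names follow the es-data.yaml style in this repo, values match the settings above):

```yaml
# Illustrative excerpt of the es-data pod spec.
containers:
- name: es-data
  image: quay.io/pires/docker-elasticsearch-kubernetes:5.6.0
  env:
  - name: ES_JAVA_OPTS
    value: "-Xms2048m -Xmx2048m"   # heap pinned at 2 GB
  resources:
    limits:
      memory: "4096M"              # cgroup limit; the kernel OOM killer fires past this
```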

The es-data nodes grow beyond 4 GB of resident memory and are then killed.
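For reference, a quick way to confirm these are cgroup OOM kills (pod name is a placeholder):

```sh
# Inspect the last container termination; OOM kills show Reason: OOMKilled
# and exit code 137.
kubectl describe pod <es-data-pod> | grep -A 5 "Last State"
```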

The container we run is: quay.io/pires/docker-elasticsearch-kubernetes:5.6.0

Any idea why this is happening? Any advice on how to keep the es-data nodes within their limits?

Thanks a ton for any advice/help!

bw2 commented 5 years ago

I've run a stable cluster with -Xms3900m -Xmx3900m or more. I think memory usage will depend on what you're doing in terms of data size, indexing, and queries. (https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html) Why do you expect 2048m to be sufficient?
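Also note that -Xmx bounds only the Java heap, not the whole process. A rough, back-of-the-envelope accounting for a 2048m heap (numbers are illustrative guesses, not measurements from this cluster):

```
heap (-Xmx)                         2048 MB
direct buffers (Netty)              up to ~2048 MB (the JVM's default direct-memory cap is roughly heap size)
metaspace + code cache              ~100-300 MB
thread stacks (-Xss1m x N threads)  ~100+ MB
----------------------------------------------------
total                               can plausibly push past a 4096 MB cgroup limit
```

So a 2:1 limit-to-heap ratio can still be tight if direct memory is not capped.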

msmaverick2018 commented 5 years ago

Are you running into the issue where the JVM does not honor the cgroup limits set by the container and instead sizes itself against the host's memory? Check the link below: https://blog.csanchez.org/2017/05/31/running-a-jvm-in-a-container-without-getting-killed/

```
-XX:+UnlockExperimentalVMOptions \
-XX:+UseCGroupMemoryLimitForHeap \
-XshowSettings:vm -version
```
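For example, to check what the JVM inside the running container thinks its memory ceiling is (pod name is a placeholder; assumes java is on the PATH in the image, and the flags require Java 8u131 or later):

```sh
kubectl exec <es-data-pod> -- java \
  -XX:+UnlockExperimentalVMOptions \
  -XX:+UseCGroupMemoryLimitForHeap \
  -XshowSettings:vm -version
```

One caveat: as far as I know, -XX:+UseCGroupMemoryLimitForHeap only changes how the JVM picks its *default* heap size, so with -Xms2048m -Xmx2048m set explicitly it will not change the heap; off-heap usage is the more likely culprit here.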