Closed rodrigorodrigues closed 2 months ago
Hmm, I'm not sure. I've never had much luck with limiting Cassandra explicitly -- using --env MAX_HEAP_SIZE='128m' --env HEAP_NEWSIZE='32m'
is the best I've found to keep the memory usage low (which is very similar to what you're using). If you get rid of the explicit limit (or raise it), does that help?
(This might be better suited to a Java or Cassandra specific forum, since it's not really specific to the container image, and then you might find folks with more knowledge of Java and Cassandra memory usage and keeping it within sane thresholds. :sweat_smile:)
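For reference, a full invocation using those environment variables might look like the following; the container name and port mapping are illustrative, not prescribed by the image:

```shell
# Sketch: run a single Cassandra node with a deliberately small JVM heap.
# MAX_HEAP_SIZE and HEAP_NEWSIZE are picked up by the image's cassandra-env.sh.
docker run -d --name cassandra-small \
  --env MAX_HEAP_SIZE='128m' \
  --env HEAP_NEWSIZE='32m' \
  -p 9042:9042 \
  cassandra:latest
```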
I haven't tried yet but according to the docs, that isn't how you set those settings.
This is the smallest I have managed to get it:
```yaml
services:
  cassandra:
    image: 'cassandra:latest'
    environment:
      JVM_OPTS: -Xmn64m -Xms128m -Xmx500m
    ports:
      - '9042:9042'
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: "500MB"
```
I think if you push it hard enough it'll balloon higher than 500M though, so your limit will likely trigger the OOM killer. I'm not familiar enough with memory management in Java to say for sure exactly how much higher it'll go, but I've definitely seen it go higher.
I wouldn't be surprised. I think the lesson here is that 300MB is just too low for the latest Cassandra to run.
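To see how far above the heap the process actually goes, you can watch the container's resident memory (assuming the service is named `cassandra` as in the compose file above); usage will typically sit well above `-Xmx` because of metaspace, thread stacks, and off-heap buffers:

```shell
# One-shot snapshot of actual container memory usage vs. its limit
# (assumes a running container named "cassandra").
docker stats --no-stream cassandra
```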
The Cassandra docker container did not start; after I added those values to the environment variables when creating the container, it now starts. I don't know why this happens to me now -- before, it started without having to add that.
thanks, it worked for me :D
By default, if you do not specify a limit, Cassandra queries the host to use some (very large) percentage of all available resources. The values I provided are intentionally pretty unreasonably low, and I would not recommend them for a real deployment (you'll probably want/need higher values there).
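For context, the image's `cassandra-env.sh` derives the default heap roughly as max(min(half the system memory, 1024 MB), min(a quarter of the system memory, 8192 MB)). A sketch of that calculation (the 2 GB host value below is a made-up example):

```shell
# Sketch of the default MAX_HEAP_SIZE calculation from cassandra-env.sh:
#   max(min(1/2 * RAM, 1024 MB), min(1/4 * RAM, 8192 MB))
system_memory_mb=2048   # hypothetical host with 2 GB of RAM

half_mb=$(( system_memory_mb / 2 ))
quarter_mb=$(( system_memory_mb / 4 ))

# Clamp each candidate to its cap.
[ "$half_mb" -gt 1024 ] && half_mb=1024
[ "$quarter_mb" -gt 8192 ] && quarter_mb=8192

# Take the larger of the two clamped candidates.
if [ "$half_mb" -gt "$quarter_mb" ]; then
  max_heap_mb=$half_mb
else
  max_heap_mb=$quarter_mb
fi
echo "MAX_HEAP_SIZE=${max_heap_mb}M"   # 1024M for a 2 GB host
```

So even a modest host gets a 1 GB heap by default, which is why an unconfigured container blows straight past a 300 MB limit.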
Closing since this is the nature of cassandra (and java-based applications) and there is a sufficient workaround: set hard limits (e.g. `--memory` or `memory:`) higher than the flags given to the JVM (e.g. `-Xmx`) to prevent OOM kills.
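Concretely, that workaround might look like the following; the exact headroom (here a 1 GB limit over a 512 MB heap) is a guess and may need tuning for your workload:

```shell
# Sketch: keep the hard memory limit comfortably above -Xmx so that
# non-heap memory (metaspace, thread stacks, off-heap buffers) does not
# push the container over the limit and trigger the OOM killer.
docker run -d --name cassandra-limited \
  --env JVM_OPTS='-Xms512m -Xmx512m' \
  --memory 1g \
  -p 9042:9042 \
  cassandra:latest
```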
Hi all,
I'm trying to run Cassandra with as little memory as possible on Docker, but the server gets killed after a while. Is there any way to run Cassandra with minimal memory? It could disable most features like authorization and clustering -- I just need a simple single node to store data quickly.