docker-library / cassandra

Docker Official Image packaging for Cassandra
Apache License 2.0

Cassandra doesn't run with low memory on Docker #277

Closed: rodrigorodrigues closed this issue 2 months ago

rodrigorodrigues commented 6 months ago

Hi all,

I'm trying to run Cassandra on Docker with as little memory as possible, but the server gets killed after a while. Is there any way to run Cassandra with minimal memory? It could disable most features, like authorization and clustering; I just need a simple single node to store data quickly.

services:
  cassandra:
    image: 'cassandra:latest'
    environment:
      - 'HEAP_NEWSIZE=10M'
      - 'MAX_HEAP_SIZE=200M'
    ports:
      - '9042:9042'
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: "300MB"
tianon commented 6 months ago

Hmm, I'm not sure. I've never had much luck with limiting Cassandra explicitly -- using --env MAX_HEAP_SIZE='128m' --env HEAP_NEWSIZE='32m' is the best I've found to keep the memory usage low (which is very similar to what you're using). If you get rid of the explicit limit (or raise it), does that help?

(This might be better suited to a Java or Cassandra specific forum, since it's not really specific to the container image, and then you might find folks with more knowledge of Java and Cassandra memory usage and keeping it within sane thresholds. 😅)
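
For reference, those flags translate to a plain docker run along these lines (a minimal sketch; the container name and published port are illustrative):

docker run -d --name cassandra-lowmem \
  --env MAX_HEAP_SIZE='128m' \
  --env HEAP_NEWSIZE='32m' \
  -p 9042:9042 \
  cassandra:latest

(The stock cassandra-env.sh expects MAX_HEAP_SIZE and HEAP_NEWSIZE to be set in pairs, so don't set just one of them.)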

LaurentGoderre commented 4 months ago

I haven't tried it yet, but according to the docs, that isn't how you set those settings.

https://cassandra.apache.org/doc/latest/cassandra/getting-started/configuring.html#environment-variables

LaurentGoderre commented 4 months ago

This is the smallest I have managed to get it:

services:
  cassandra:
    image: 'cassandra:latest'
    environment:
      JVM_OPTS: -Xmn64m -Xms128m -Xmx500m
    ports:
      - '9042:9042'
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: "500MB"
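
One quick way to confirm the JVM actually picked those settings up (a sketch, assuming the compose service is named cassandra as above) is to check the heap figures nodetool reports:

docker compose exec cassandra nodetool info | grep -i heap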
tianon commented 4 months ago

I think if you run it hard enough it'll balloon higher than 500M though, so your limit will likely trigger the OOM killer. I'm not familiar enough with memory management in Java to say for sure exactly how much higher it'll go, but I've definitely seen it go higher.

LaurentGoderre commented 4 months ago

I wouldn't be surprised. I think the lesson here is that 300MB is just too low for the latest Cassandra to run.

carlosucros commented 3 months ago

> Hmm, I'm not sure. I've never had much luck with limiting Cassandra explicitly -- using --env MAX_HEAP_SIZE='128m' --env HEAP_NEWSIZE='32m' is the best I've found to keep the memory usage low (which is very similar to what you're using). If you get rid of the explicit limit (or raise it), does that help?
>
> (This might be better suited to a Java or Cassandra specific forum, since it's not really specific to the container image, and then you might find folks with more knowledge of Java and Cassandra memory usage and keeping it within sane thresholds. 😅)

The Cassandra Docker container did not start. I added those values to the environment variables when creating the container, and now it starts. I don't know why this happens now; before, it started without having to add them.

Thanks, it worked for me :D

tianon commented 3 months ago

By default, if you do not specify a limit, Cassandra queries the host to use some (very large) percentage of all available resources. The values I provided are intentionally pretty unreasonably low, and I would not recommend them for a real deployment (you'll probably want/need higher values there).
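
For the curious, the defaults come from calculate_heap_sizes() in cassandra-env.sh; paraphrased as a shell sketch (simplified variable names, not the verbatim script, at least in the revisions I've read):

# Paraphrased sketch of calculate_heap_sizes() from cassandra-env.sh.
system_memory_mb=$(free -m | awk '/Mem:/ {print $2}')
cores=$(nproc)

half_mb=$(( system_memory_mb / 2 ))
quarter_mb=$(( system_memory_mb / 4 ))

# MAX_HEAP_SIZE = max( min(1/2 * RAM, 1024MB), min(1/4 * RAM, 8192MB) )
min_half=$(( half_mb < 1024 ? half_mb : 1024 ))
min_quarter=$(( quarter_mb < 8192 ? quarter_mb : 8192 ))
max_heap_mb=$(( min_half > min_quarter ? min_half : min_quarter ))

# HEAP_NEWSIZE = min( 100MB * cores, 1/4 * MAX_HEAP_SIZE )
per_core_mb=$(( cores * 100 ))
quarter_heap_mb=$(( max_heap_mb / 4 ))
heap_newsize_mb=$(( per_core_mb < quarter_heap_mb ? per_core_mb : quarter_heap_mb ))

echo "MAX_HEAP_SIZE=${max_heap_mb}M HEAP_NEWSIZE=${heap_newsize_mb}M"

Note that inside a container, free reads /proc/meminfo and so reports the host's memory rather than the cgroup limit, which is why the computed defaults can blow right past a small --memory cap.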

yosifkit commented 2 months ago

Closing, since this is the nature of Cassandra (and Java-based applications generally) and there is a sufficient workaround: set hard limits (e.g. --memory or memory:) higher than the values given to the JVM flags (e.g. -Xmx) to prevent OOM kills.
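
In docker run terms, that workaround looks something like this (a sketch; the numbers are illustrative, with the hard cap comfortably above -Xmx):

# Hard cap (--memory) set well above the JVM's -Xmx so that normal
# off-heap usage (metaspace, thread stacks, direct buffers) doesn't
# trigger the OOM killer.
docker run -d --name cassandra-capped \
  --memory 1g \
  --env JVM_OPTS='-Xms128m -Xmx500m' \
  -p 9042:9042 \
  cassandra:latest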