The heap size in the Cassandra container does not currently scale in response to changes to the memory limits set by the container runtime.
This means that users who set a memory limit may end up with a heap larger than 50% of the total memory available, which will cause the OOM killer to kill processes in the management API container.
We can use the settings -XX:MaxRAMFraction and -XX:MinRAMFraction for Java 8 updates 131-190. More recent versions of Java require the use of -XX:MaxRAMPercentage and -XX:MinRAMPercentage.
When implementing, we should ensure that users who set the regular max_heap_size explicitly can still set it in a fixed fashion. This ticket is just to implement a default which is sensible for users who are defining resource limits via CRI mechanisms.
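As a rough sketch of the version-dependent flag selection described above (the flag names are real HotSpot options; the JAVA_MAJOR detection is simplified here, and in practice it would be parsed from `java -version` in the container entrypoint):

```shell
# Illustrative sketch: pick container-aware heap flags by JVM major version.
# JAVA_MAJOR would normally be detected from `java -version`; here it is a
# plain variable so the selection logic stays clear.
JAVA_MAJOR="${JAVA_MAJOR:-11}"
if [ "$JAVA_MAJOR" -le 8 ]; then
  # Java 8u131-8u190: fraction flags (2 => heap = 1/2 of the cgroup limit);
  # on Java 8, container awareness also requires the experimental cgroup flag.
  HEAP_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 -XX:MinRAMFraction=2"
else
  # Java 10+: percentage flags (50.0 => heap capped at 50% of the container limit).
  HEAP_OPTS="-XX:MaxRAMPercentage=50.0 -XX:MinRAMPercentage=50.0"
fi
echo "$HEAP_OPTS"
```

Note the inversion in the fraction flags: a MaxRAMFraction of 2 means the heap may use 1/2 of available RAM, which is why the percentage flags in Java 10+ are easier to reason about.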
Issue is synchronized with this Jira Story by Unito
Issue Number: CASS-45