Steps to reproduce

Expected behavior

innodb_buffer_pool_size and group_replication_message_cache_size are tuned proportionally to the available memory of the pod container, leaving enough memory for the required connections (each reserving ~12MiB).
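A minimal sketch of what that policy could look like (the names `expected_tuning` and `CONNECTION_OVERHEAD` are hypothetical, not the charm's actual API): reserve connection memory first, then split the remainder between the buffer pool and the message cache.

```python
GIB = 1024 ** 3
MIB = 1024 ** 2
CONNECTION_OVERHEAD = 12 * MIB  # each connection reserves ~12MiB


def expected_tuning(available_memory: int, max_connections: int) -> tuple[int, int]:
    """Reserve connection memory up front, then split the remainder between caches."""
    tunable = available_memory - max_connections * CONNECTION_OVERHEAD
    pool_size = int(tunable * 0.75)
    # Cap the message cache at its 1GiB default and at whatever is actually left.
    message_cache = min(GIB, tunable - pool_size)
    return pool_size, message_cache


# With a 2GiB container and 100 expected connections:
pool, cache = expected_tuning(2 * GIB, 100)
print(pool // MIB, cache // MIB)  # 636 212 -> both fit alongside the connections
```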
Actual behavior
If the total available memory is 2GiB:
- innodb_buffer_pool_size is tuned to 536870912 (512MiB)
- group_replication_message_cache_size is set to 1073741824 (its default of 1GiB)
Relevant code:
available_memory = 2GiB (2147483648)
pool_size = (0.75 * 2GiB) - 1GiB = 536870912
available_memory - pool_size = 1610612736 (which is > 1GiB)
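The computation above can be reproduced directly; this is a simplified reconstruction based on the numbers in this report, not a copy of the charm's code:

```python
GIB = 1024 ** 3

available_memory = 2 * GIB                      # 2147483648
pool_size = int(available_memory * 0.75) - GIB  # 1610612736 - 1073741824 = 536870912
remainder = available_memory - pool_size        # 1610612736

# The remainder exceeds 1GiB, so the message cache keeps its 1GiB default,
# committing pool_size + 1GiB = 1.5GiB of the 2GiB container before any
# connection memory is accounted for.
assert remainder > GIB
print(pool_size, remainder)  # 536870912 1610612736
```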
With the above allocation, innodb_buffer_pool_size is set to 512MiB and group_replication_message_cache_size to 1GiB, committing 1.5GiB of the 2GiB container and leaving only 512MiB for everything else. The container therefore gets OOM-killed once the message cache is full and there are more than ~43 connections (12MiB * 42.67 ≈ 512MiB). If a unit in the cluster is down, the message cache fills toward its 1GiB limit and the connection capacity diminishes accordingly.
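The headroom arithmetic behind the OOM kill, assuming only the buffer pool, a full message cache, and per-connection memory count against the 2GiB limit:

```python
GIB = 1024 ** 3
MIB = 1024 ** 2

limit = 2 * GIB
pool_size = 512 * MIB       # innodb_buffer_pool_size as computed above
message_cache = 1 * GIB     # group_replication_message_cache_size, full
per_connection = 12 * MIB

headroom = limit - pool_size - message_cache  # 512MiB left for connections
print(headroom // per_connection)             # 42 -> the 43rd connection overcommits
```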
Versions
Operating system: Ubuntu 22.04.4 LTS
Juju CLI: 3.5.4
Juju agent: 3.5.4
Charm revision: 205
microk8s: MicroK8s v1.31.1 revision 7234