canonical / mysql-k8s-operator

A Charmed Operator for running MySQL on Kubernetes
https://charmhub.io/mysql-k8s
Apache License 2.0

Charm not tuning group_replication_message_cache_size ideally if total allocated memory is small #521

Open shayancanonical opened 1 month ago

shayancanonical commented 1 month ago

Steps to reproduce

  1. juju deploy -n 1 mysql-k8s --channel 8.0/edge --constraints "mem=2G"

Expected behavior

innodb_buffer_pool_size and group_replication_cache_size are tuned proportionally to the available memory on the pod container, allowing enough memory for required connections (each reserving 12MB)

Actual behavior

If the total available memory is 2G:

  * innodb_buffer_pool_size is tuned to 536870912 (512MB)
  * group_replication_message_cache_size is set to 1073741824 (the default of 1G)

Relevant code:

available_memory = 2Gi (2147483648)
pool_size = (0.75 × 2Gi) - 1Gi = 536870912
available_memory - pool_size = 1610612736 (which is > 1Gi)

With the above allocated memory, we set innodb_buffer_pool_size to 512MB and group_replication_message_cache_size to 1G. As a result, the container would get OOM killed if the message cache is full and there are more than ~43 connections (12MB × 42.67 ≈ 512MB). If a unit in the cluster is down, the group_replication_message_cache_size grows and the connection capacity diminishes further.
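The arithmetic above can be sketched as follows. This is an illustrative reconstruction of the sizing logic described in this comment, not the charm's actual code; the constant names and the ~12 MiB-per-connection figure are taken from the issue text.

```python
MIB = 1024 ** 2
GIB = 1024 ** 3

# Pod memory limit from the "mem=2G" constraint
available_memory = 2 * GIB  # 2147483648 bytes

# Buffer pool sizing as described above: (0.75 * available) - 1Gi
pool_size = int(available_memory * 0.75) - GIB  # 536870912 (512 MiB)

# The remainder after the buffer pool still exceeds 1 GiB, so the
# message cache is left at its 1 GiB default
remaining_after_pool = available_memory - pool_size  # 1610612736 (> 1 GiB)
group_cache = GIB  # group_replication_message_cache_size = 1073741824

# Headroom left for connections once the message cache is full,
# at roughly 12 MiB reserved per connection
headroom = available_memory - pool_size - group_cache  # 536870912 (512 MiB)
max_connections = headroom / (12 * MIB)  # ~42.7 connections before OOM
```

Anything past ~43 connections with a full message cache exceeds the 2 GiB limit, which matches the OOM-kill scenario described above.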

Versions

Operating system: Ubuntu 22.04.4 LTS

Juju CLI: 3.5.4

Juju agent: 3.5.4

Charm revision: 205

microk8s: MicroK8s v1.31.1 revision 7234

syncronize-issues-to-jira[bot] commented 1 month ago

Thank you for reporting your feedback!

The internal ticket has been created: https://warthogs.atlassian.net/browse/DPE-5654.

This message was autogenerated