Closed by shayancanonical 2 months ago
Q: who will update the K8s selector if a user deploys an old revision and refreshes to the new charm revision? ;-)
Very wishful thinking, but the reporter for issue 459 confirmed that multiple MySQL applications are deployed when sunbeam is deployed. My thought was that two applications could end up with the same generated `cluster-name` (confirmed with Guillaume from sunbeam that `cluster-name` is not explicitly configured). If that is the case, traffic destined for one cluster's primary may be redirected to another cluster's primary. However, I now realize that issue 459 happens when flushing logs, so more investigation is required.
On `update-status`, if the cluster is healthy, we run `update-endpoints`, which patches the pod labels correctly. When the charm is upgraded, the next `update-status` should therefore correct the k8s selector. Furthermore, upon `complete-upgrade` (or rather `pebble-ready` after the pod restarts) of `mysql-k8s/1`, we call `set_cluster_primary` to set unit 1 as the primary before upgrading unit 0. Setting this primary updates the pod labels correctly, so the k8s selector will work right after upgrading the charm to the new revision.
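For context, a minimal sketch of how pod labels might be patched with lightkube so that a Service selector follows the current primary; the helper name, label keys, and values here are illustrative assumptions, not the charm's actual code:

```python
# Sketch (not the charm's actual code): patch a unit's pod labels so
# that a Service selecting on cluster-name + role routes to the primary.
from lightkube import Client
from lightkube.resources.core_v1 import Pod


def label_pod(client: Client, pod_name: str, namespace: str,
              cluster_name: str, role: str) -> None:
    """Strategic-merge-patch the pod's labels (hypothetical helper)."""
    client.patch(
        Pod,
        pod_name,
        {"metadata": {"labels": {"cluster-name": cluster_name, "role": role}}},
        namespace=namespace,
    )


# e.g. after set_cluster_primary promotes mysql-k8s/1:
# label_pod(Client(), "mysql-k8s-1", "my-model", "cluster-abc", "primary")
```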
Issue
When two MySQL applications exist in the same model with the same `cluster-name`, traffic intended for one cluster can be routed to the other cluster.
Example: https://github.com/canonical/mysql-k8s-operator/actions/runs/10730746555/job/29759894942#step:37:1051
Possibly related: https://github.com/canonical/mysql-k8s-operator/issues/459
This is because our selector for the service endpoint is `cluster-name` and `role`.
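For illustration, a hedged sketch of a Service built with that selector (names and values are assumptions): any pod carrying matching `cluster-name` and `role` labels receives traffic, even if it belongs to a different Juju application.

```python
# Sketch of the current selector described above. Two clusters that
# happen to share a generated cluster-name would both match it.
from lightkube.models.core_v1 import ServicePort, ServiceSpec
from lightkube.models.meta_v1 import ObjectMeta
from lightkube.resources.core_v1 import Service

primary_service = Service(
    metadata=ObjectMeta(name="mysql-primary", namespace="my-model"),
    spec=ServiceSpec(
        selector={"cluster-name": "cluster-abc", "role": "primary"},
        ports=[ServicePort(port=3306, targetPort=3306)],
    ),
)
```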
Solution
Add `application-name` as a label to pods, and use it as a selector for the service endpoint as well.
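A minimal sketch of what the widened selector could look like, assuming a hypothetical `application-name` label key (the exact key the charm settles on may differ):

```python
# Proposed fix (sketch): include the Juju application name in both the
# pod labels and the Service selector, so two clusters that share a
# generated cluster-name can no longer match each other's endpoints.
selector = {
    "cluster-name": "cluster-abc",
    "role": "primary",
    "application-name": "mysql-k8s",  # disambiguates same-named clusters
}
```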