Closed: @gboutry closed this issue 3 months ago.
@gboutry Although I've identified at least one race condition that could lead to this error, I could not reproduce it locally.
I'm running:
juju add-model cos
juju deploy cos-lite --overlay ${HOME}/storage-small-overlay.yaml --trust
juju offer prometheus:metrics-endpoint pm-me
juju add-model squirrel
juju deploy ch:mysql-k8s --channel 8.0/stable --trust -n 3
juju consume admin/cos.pm-me
juju integrate mysql-k8s pm-me
juju model-config logging-config="<root>=WARNING;unit=DEBUG" -m squirrel
juju debug-log
Do you see any fundamental difference from what you are doing?
I don't see any fundamental difference, except that we have grafana-agent-k8s in the middle, but that should not have an effect (I hope).
I did a deployment from 8.0/edge, related all the endpoints, and did not encounter the issue.
Closing this as unable to reproduce on current revisions.
Steps to reproduce
Expected behavior
Does not error out
Actual behavior
Failure during the relation-joined hook:
charms.mysql.v0.mysql.MySQLExecError: error: cannot stat /etc/logrotate.d/flush_mysql_logs: No such file or directory
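For context, the stat failure means something shelled out to logrotate with a config file that did not yet exist. A minimal guard against that window could look like the sketch below; the `rotate_flush_logs` name and the config path are hypothetical stand-ins, not the charm's actual code:

```shell
#!/bin/sh
# Hypothetical sketch: skip rotation while the config is absent instead of
# erroring out. CONF stands in for /etc/logrotate.d/flush_mysql_logs;
# mktemp guarantees a path that does not exist for this demonstration.
CONF="$(mktemp -d)/flush_mysql_logs"

rotate_flush_logs() {
    if [ -f "$CONF" ]; then
        # Real path: force a rotation using the charm-managed config.
        logrotate -f "$CONF"
    else
        # Race window: the config has not been written yet, so do nothing
        # rather than fail the hook.
        echo "config $CONF not present yet; skipping rotation"
        return 0
    fi
}

rotate_flush_logs
```

Here the guard simply defers rotation; a retry on the next hook invocation would pick it up once the file exists.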
Versions
Operating system: Ubuntu 22.04.4 LTS
Juju CLI: 3.2.4-genericlinux-amd64
Juju agent: 3.2.4
Charm revision: 113
microk8s: MicroK8s v1.28.7 revision 6532
Log output
Juju debug log: https://pastebin.canonical.com/p/rfP7thNKVN/
Additional context
Got the charm going further by killing the pod.
Happened to 3 out of my 9 MySQL deployments (yes, MySQL is deployed as 9 different applications inside the same model).
Each MySQL is deployed with 3 units; it was the unit that received the event first that failed each time.
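Since a race condition is suspected, one classic way a file can transiently "not exist" for a concurrent reader is a non-atomic write. A hedged sketch of the usual fix, writing the config to a temp name and renaming it into place (paths and content are illustrative, not taken from the charm):

```shell
#!/bin/sh
# Hypothetical sketch: publish a logrotate config atomically so a concurrent
# hook never observes a missing or half-written file. DIR stands in for
# /etc/logrotate.d in this demonstration.
DIR="$(mktemp -d)"
TARGET="$DIR/flush_mysql_logs"
TMP="$TARGET.tmp.$$"

# Write the full content to a temporary file first (illustrative content).
cat > "$TMP" <<'EOF'
/var/log/mysql/*.log {
    rotate 10
    missingok
}
EOF

# rename() is atomic on the same filesystem: readers see either no file
# (and can skip/retry) or the complete config, never a partial one.
mv "$TMP" "$TARGET"
cat "$TARGET"
```

Whether this matches the actual bug depends on how the charm creates the file, which the log excerpt alone does not show.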