Open ynnt opened 3 years ago
Same problem
Yes, and the orchestrator logs show: Unable to determine cluster name
This is for a brand new cluster
I deployed the cluster in a different namespace and also tried the same namespace as the operator, with the same result. I also tried changing the name of the cluster, but hit the same issue as above.
I have a similar issue: the cluster starts, but after some time the mysql pod becomes non-ready and I get the above log message in the operator logs.
I have this problem too
Same problem
Please make sure you are not hitting #170. (see https://www.bitpoke.io/docs/mysql-operator/deploy-mysql-cluster/#note-1).
Also please try with v0.5.0.
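Before upgrading, it can help to confirm what Orchestrator has actually registered. A minimal sketch, assuming the operator runs in the `mysql-operator` namespace and exposes Orchestrator's default HTTP port 3000 (service name and port mapping are assumptions; adjust to your release):

```shell
# Hedged sketch: list the clusters Orchestrator knows about.
# Names/ports below are assumptions for illustration, not the exact chart values.

# Forward Orchestrator's HTTP port from the operator:
kubectl port-forward -n mysql-operator svc/mysql-operator 3000:80 &

# Query Orchestrator's API for registered clusters; if your cluster is
# missing here, that matches the "Unable to determine cluster name" log line:
curl -s http://localhost:3000/api/clusters
```

If the cluster never appears in this list, the problem is on the registration side (naming, #170) rather than replication itself.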
Hello. In my case:
At first I figured the problem was mysql-operator itself, since no changes helped at all: everything had been working, but the MySQL clusters gradually stopped working. A nightmare...
It turned out an error had occurred earlier: a pod belonging to the MySQL cluster was never deleted. The same error had already appeared at the very beginning, when I tried to repair the K8S cluster; I ignored it at the time. It happened at the "Drain node" stage:
fatal: [node1]: FAILED! => {"attempts": 3, "changed": true, "cmd": ["/usr/local/bin/kubectl", "--kubeconfig", "/etc/kubernetes/admin.conf", "drain", "--force", "--ignore-daemonsets", "--grace-period", "300", "--timeout", "360s", "--delete-emptydir-data", "node1"], "delta": "0:06:01.760844", "end": "2022-10-05 02:44:14.018346", "failed_when_result": true, "msg": "non-zero return code", "rc": 1, "start": "2022-10-05 02:38:12.257502", "stderr": "WARNING: ignoring DaemonSet-managed Pods: default/netchecker-agent-hostnet-xvkjz, default/netchecker-agent-w282k, *** \nerror when evicting pods/\"***-mysql-0\" -n \"***\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.\nerror when evicting pods/\"***-mysql-0\" -n \"***\"
In my case the root cause was that Kubespray was unable to fully upgrade the cluster.
hello
Same problem here: 77 clusters deployed without problems, but one of them refuses to deploy its second node because of "cluster not found in Orchestrator". No other errors at all.
Hello, I found that some data is still present in the sqlite db days after the cluster was deleted, in database_instance_last_analysis, database_instance_tls, kv_store, and hostname_ips.
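A minimal sketch of clearing those stale rows by hand, assuming Orchestrator's sqlite backend; the db path, column names, and hostname pattern are assumptions, so back up the file first and ideally stop Orchestrator while editing it:

```shell
# Hedged sketch: purge rows referencing a deleted cluster from Orchestrator's
# sqlite db. Path and pattern below are assumptions; adjust to your setup.
DB=/var/lib/orchestrator/orchestrator.sqlite3
PATTERN='%my-deleted-cluster%'

cp "$DB" "$DB.bak"   # back up before touching anything

sqlite3 "$DB" <<SQL
DELETE FROM database_instance_last_analysis WHERE hostname LIKE '$PATTERN';
DELETE FROM database_instance_tls           WHERE hostname LIKE '$PATTERN';
DELETE FROM hostname_ips                    WHERE hostname LIKE '$PATTERN';
DELETE FROM kv_store                        WHERE store_key LIKE '$PATTERN';
SQL
```

After restarting Orchestrator, a cluster redeployed under the same name should register cleanly instead of colliding with the leftover entries.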
Thanks for your help, I'm really stuck here :(
The cluster is stuck in the Ready: False phase because the mysql pod never becomes Ready.