borah-hemanga opened this issue 2 years ago
You will need to change the kubernetes.cluster-id config every time you want to deploy a Flink app (increment it or use the current timestamp) on any FlinkApplication config change. That way, when the operator starts the upgrade and the new cluster comes up, it won't try to behave as a failover of the existing cluster you are running.
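A minimal sketch of what this could look like in a FlinkApplication spec, assuming the operator passes spec.flinkConfig entries through to the Flink cluster (the app name, namespace, and timestamp suffix here are hypothetical):

```yaml
apiVersion: flink.k8s.io/v1beta1
kind: FlinkApplication
metadata:
  name: my-app            # hypothetical application name
  namespace: flink
spec:
  flinkConfig:
    # Change this value on every deploy, e.g. by appending the
    # current timestamp, so the new cluster does not act as a
    # failover of the old one.
    kubernetes.cluster-id: my-app-20240101120000
```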
I think for the operator to support the scenario of keeping the same kubernetes.cluster-id, it would need to first shut down the job that is already running and stop the cluster, and only then start the new cluster and deploy the app. Currently it tries to minimize downtime by having both clusters running during the upgrade. It would be nice to have that mode too.
@nikolasten Is it only kubernetes.cluster-id?
This config option is for ZooKeeper HA only, not for Kubernetes HA. We did this in our fork to enable it and to make sure it is different every time we deploy or upgrade the app: https://github.com/bluelabs-eu/flinkk8soperator/commit/fa64278343aab41a6815343665a342944ccc9510#diff-0e21f32f488d8c4a8aeb58de476274825e4004216515b5bcbcbe0045efe08b00R215-R218
This PR, https://github.com/lyft/flinkk8soperator/pull/170, addresses changing the cluster id on every deploy, but it does not add a config option for the Kubernetes-based HA mode.
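For reference, enabling Kubernetes-based HA in Flink itself (independent of the operator) uses the standard Flink configuration options below; the storage path and cluster-id value are placeholders, not from this thread:

```yaml
# Flink configuration (flink-conf.yaml) for Kubernetes-based HA.
# The factory class is Flink's built-in Kubernetes HA services factory.
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
# Durable storage for HA metadata; the bucket/path here is hypothetical.
high-availability.storageDir: s3://my-bucket/flink-ha
# Must be unique per logical cluster; the discussion above suggests
# changing it on every deploy to avoid failover from the old cluster.
kubernetes.cluster-id: my-app-20240101120000
```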
I tried out the native HA on Kubernetes using the operator.
Here is the general synopsis:
The deployment (update) of an existing application goes through the following:
Has anyone been successful in using native Kubernetes HA for Flink with this FlinkK8sOperator?