Closed amit-k-yadav closed 3 years ago
Hi @amit-k-yadav, I'm afraid that what you are trying to achieve is not supported by this Helm chart. MariaDB Galera needs to reach as many peers as possible; otherwise, you will lose the high-availability capabilities. Note that the headless service is bound to all the Galera peers, while your external service is going to be bound to only one of them.
Thank you @andresbono for your reply. I understand what you are saying, and that is the reason why I have reduced the `replicaCount` to `1` for now. This way, there is only one peer, and this worked perfectly fine with the headless service.
I am planning to have one LoadBalancer service per pod (i.e. per peer) and list all the services in `MARIADB_GALERA_CLUSTER_ADDRESS` (i.e. `gcomm://...`). That, however, will be the next step; for now it should work, since there is only one pod and the service is dedicated to that particular pod.
I am not sure why the headless service works and the LoadBalancer service fails for a single peer.
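For reference, the plan described above would mean listing every per-peer external service in the cluster address. A minimal sketch of what that might look like for three peers; the service hostnames are purely illustrative assumptions, not values from the chart:

```yaml
# Hypothetical env var for a 3-node cross-cluster setup, with one external
# LoadBalancer service (and DNS name) per Galera peer. Hostnames are
# assumptions for illustration only.
- name: MARIADB_GALERA_CLUSTER_ADDRESS
  value: "gcomm://galera-0.region-a.example.com,galera-1.region-b.example.com,galera-2.region-c.example.com"
```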
After some investigation, I couldn't make it work yet. It seems the problem is related to `wsrep_node_address`. This value should be the same as one of those used in `wsrep_cluster_address`.
The container guesses the value for `wsrep_node_address` by executing `hostname -i`, which returns the pod IP (link).
When a headless service is used with selectors, it points directly to the pod IP, so `wsrep_node_address` and `wsrep_cluster_address` are compatible, but that's not the case when you define the service type as LoadBalancer...
To work around this issue I tried to use the `MARIADB_GALERA_NODE_ADDRESS` env var. You will need to modify the `statefulset.yaml` file in order to add it (or, alternatively, you can use `extraEnvVars`):
```yaml
- name: MARIADB_GALERA_CLUSTER_ADDRESS
  value: "gcomm://{{ template "common.names.fullname" . }}-external.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}"
- name: MARIADB_GALERA_NODE_ADDRESS
  value: "{{ template "common.names.fullname" . }}-external.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}"
```
But even after setting it, the error is still the same. I'm sharing this information with you so that you can continue debugging it on your side.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
I'm also interested in this issue, specifically a cross-regional setup across multiple clusters.
Hi @leokhachatorians, as @andresbono said, it seems the target configuration of this issue is not supported by the Helm chart at this moment: https://github.com/bitnami/charts/issues/4887#issuecomment-754590126. You could try a configuration like the one explained in https://github.com/bitnami/charts/issues/4887#issuecomment-754605180, but it seems he is also having issues with that.
@miguelaeh Ah, understood. I ended up more or less doing a bit of a hack to `libmariadbgalera.sh` to adjust the manner in which it determines when to bootstrap the cluster, in case anyone ends up going down a similar path.
Thank you for sharing it, @leokhachatorians!
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Which chart:
Bug description & steps to reproduce
I am trying to deploy a Galera cluster on multiple Kubernetes clusters (one Galera node per Kubernetes cluster) across multiple GCP regions for high availability during regional failures in Google Cloud Platform.
I have set the value of `replicaCount` (here) to `1` in order to have just one node to start with. Now, since headless services are not reachable from outside the Kubernetes cluster, I have added an additional service specifically for this only running pod. Below is the service definition:
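A minimal sketch of such a LoadBalancer service, assuming the release is named `mariadb-galera` and uses the chart's default labels; names, labels, and the standard Galera ports (3306 for MySQL, 4567 for replication, 4568 for IST, 4444 for SST) are assumptions, not the exact file from this report:

```yaml
# external-galera-svc.yaml -- a hypothetical LoadBalancer counterpart of the
# chart's headless service, targeting the single Galera pod. Labels and
# names are assumptions based on the chart's conventions.
apiVersion: v1
kind: Service
metadata:
  name: mariadb-galera-external
spec:
  type: LoadBalancer
  ports:
    - name: mysql
      port: 3306
    - name: galera     # Galera replication traffic
      port: 4567
    - name: ist        # incremental state transfer
      port: 4568
    - name: sst        # full state snapshot transfer
      port: 4444
  selector:
    app.kubernetes.io/name: mariadb-galera
    app.kubernetes.io/instance: mariadb-galera
```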
I took the already existing headless service and created a new file (`external-galera-svc.yaml`) with the above content, so it gets deployed when I install the chart.
I have also changed the `statefulset.yaml` file here to use the LoadBalancer (external) service instead of the headless service for `MARIADB_GALERA_CLUSTER_ADDRESS`. Below are the changes:
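A sketch of the kind of change described, assuming the external service is named `<fullname>-external`, matching the snippet @andresbono shared earlier in the thread:

```yaml
# Hypothetical excerpt from statefulset.yaml: point the cluster address at
# the external LoadBalancer service's DNS name instead of the headless one.
- name: MARIADB_GALERA_CLUSTER_ADDRESS
  value: "gcomm://{{ template "common.names.fullname" . }}-external.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}"
```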
I am getting the below error:
Expected behavior
With `replicaCount: 1` and the headless service it works fine; it should work with a LoadBalancer service too.
Version of Helm and Kubernetes:
Output of `helm version`:
Output of `kubectl version`: