Open · jackin853 opened this issue 2 years ago
This seems to have been a one-off occurrence. I have tried the same operation several times but cannot reproduce the problem.
Hey @jackin853
Sounds like an odd issue. How often are you experiencing this? Looking at some of the past issues we've come across, this seems very similar to the behavior seen in #7750. The fix for that issue is outlined in #2868, and there are some workarounds in the comments that may work for you 🤞
Let me know if that sounds like what you're experiencing and whether the workaround (using `bootstrap_expect=3`) works for you.
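For context, a minimal sketch of where that setting would sit in a three-server deployment, assuming the agents are started through container args (the image tag, data dir, and surrounding fields are illustrative, not taken from your manifest):

```yaml
# Illustrative only: each of the three server agents waits for three servers
# to be present before bootstrapping, which avoids lone single-node elections
# on a cold start.
containers:
  - name: consul
    image: hashicorp/consul:1.15   # example tag
    args:
      - "agent"
      - "-server"
      - "-bootstrap-expect=3"
      - "-data-dir=/consul/data"
```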
@Amier3 Thanks, I will take a look at #2868. I have another question here: I want to know how to configure Consul's log output. With the StatefulSet deployment method above, I can't find the relevant log output directory. I want to redirect the logs to a local file, because we have not yet implemented dynamic pod log collection based on EFK.
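In case it helps with the logging question: the Consul agent can write its own logs to a file via the `-log-file` and `-log-rotate-*` flags, independent of any EFK pipeline. A minimal sketch, assuming an `emptyDir` volume mounted at `/consul/logs` (the volume name and paths are placeholders):

```yaml
# Sketch: have the agent write log files under a mounted volume.
# A trailing slash on -log-file makes Consul generate timestamped file names.
containers:
  - name: consul
    args:
      - "agent"
      - "-server"
      - "-log-level=info"
      - "-log-file=/consul/logs/"
      - "-log-rotate-bytes=104857600"   # rotate at ~100 MB
      - "-log-rotate-max-files=5"       # keep the 5 most recent files
    volumeMounts:
      - name: consul-logs
        mountPath: /consul/logs
volumes:
  - name: consul-logs
    emptyDir: {}
```

Note that an `emptyDir` disappears with the pod, so for logs that should survive restarts you would point this at a `hostPath` or a persistent volume instead.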
After the three nodes are restarted, Consul cannot provide services, and each Consul server is caught in an endless election cycle

In a Kubernetes environment I deploy three Consul instances with a StatefulSet. Everything runs normally until the three nodes are restarted; after that, Consul cannot provide services and each Consul server is caught in an endless election cycle. Below is my StatefulSet configuration:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-consul-statefulset
  namespace: test
  labels:
    app: test-consul-statefulset
    component: test-consul-server
spec:
  serviceName: test-consul-headless
  replicas: 3
  selector:
    matchLabels:
      app: test-consul-statefulset
      component: test-consul-server
  template:
    metadata:
      labels:
        app: test-consul-statefulset
        component: test-consul-server
    spec:
      serviceAccountName: test-consul-service-account
      nodeSelector:
        test-label: test-label
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
```
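The container spec is cut off in the paste above, so the following is not a reconstruction of it, just a generic sketch of how `-retry-join` is commonly wired up in a StatefulSet of this shape, using the per-pod DNS names the headless service `test-consul-headless` would give the three replicas (all flags and names here are illustrative):

```yaml
# Generic sketch, not the actual manifest: each server retries joining its
# peers through the StatefulSet pod DNS names behind the headless service.
containers:
  - name: consul
    args:
      - "agent"
      - "-server"
      - "-bootstrap-expect=3"
      - "-advertise=$(POD_IP)"
      - "-retry-join=test-consul-statefulset-0.test-consul-headless.test.svc.cluster.local"
      - "-retry-join=test-consul-statefulset-1.test-consul-headless.test.svc.cluster.local"
      - "-retry-join=test-consul-statefulset-2.test-consul-headless.test.svc.cluster.local"
    env:
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
```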
By exec-ing into each container and running `consul members`, I can see that each server only sees itself, and each pod's log kept repeating similar output (sorry, the original log is no longer available; the server has since been wiped).
I'm wondering why each Consul pod gets caught in an endless loop of elections. retry-join is configured, so why can't they see each other? If I delete the StatefulSet with `kubectl delete sts` (the data is retained) and then run `kubectl create -f statefulset.yaml` again, Consul runs normally and a leader is elected successfully. I'm now wondering whether something is wrong with my configuration, or whether it's something else. Hope to get some help.
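One thing that may be worth checking here, as a guess rather than a diagnosis: if the readiness probe only passes once the agent sees a leader (a common pattern for Consul servers), then after a full restart no pod is Ready, and a headless Service that only publishes Ready endpoints returns no DNS records, so `-retry-join` against those names can never succeed. A usual way around that is to publish not-ready addresses on the server service, sketched below with names taken from the manifest above (only the Serf LAN port is shown for brevity):

```yaml
# Sketch: a headless service for the servers that resolves even while the
# pods are not yet Ready, so -retry-join can find peers during a cold start.
apiVersion: v1
kind: Service
metadata:
  name: test-consul-headless
  namespace: test
spec:
  clusterIP: None                  # headless
  publishNotReadyAddresses: true   # expose endpoints before readiness passes
  selector:
    app: test-consul-statefulset
    component: test-consul-server
  ports:
    - name: serflan-tcp
      port: 8301
      targetPort: 8301
```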