@kubernetes/sig-api-machinery @kubernetes/kind-bug
/sig architecture /sig contributor-experience-test-failures /sig network /sig testing /wg kubeadm-adoption
/remove-wg kubeadm-adoption
Problems were solved when I switched my pod network from Flannel to Calico. (Tested on Kubernetes 1.11.0; I will repeat the tests tomorrow on the latest k8s version, 1.11.2.)
Tests successful with k8s versions 1.11.1 and 1.11.2. No more issues with Calico.
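For anyone repeating this, the switch boils down to bootstrapping with the pod CIDR Calico expects and applying its manifest. A minimal sketch, assuming a kubeadm install and Calico's default 192.168.0.0/16 CIDR (here `calico.yaml` stands in for whatever manifest the Calico docs list for your version):

```
# Bootstrap with the pod CIDR Calico's manifest assumes by default
kubeadm init --pod-network-cidr=192.168.0.0/16

# Apply the Calico manifest for your Calico/Kubernetes version
kubectl apply -f calico.yaml

# Confirm the calico-node pods go Ready on every node
kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
```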
Hi,
I have the same problem with 1.11.3 and an HA cluster set up as detailed in https://kubernetes.io/docs/setup/independent/high-availability/
I am using an HAProxy LB:
```
frontend k8s-api
    # (the frontend stanza was cut off in the paste; "k8s-api" is an assumed name)
    bind 192.168.1.30:6443
    # bind 127.0.0.1:443
    mode tcp
    option tcplog
    default_backend k8s-api-backend

backend k8s-api-backend
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server ht-dkh-01 192.168.1.21:2380 check
    server ht-dkh-02 192.168.1.22:2380 check
    server ht-dkh-03 192.168.1.23:2380 check
```
On my 3rd node I get:
```
[root@ht-dkh-03 ~]# kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP2_HOSTNAME} https://${CP2_IP}:2380
Unable to connect to the server: x509: certificate is valid for ht-dkh-02, localhost, ht-dkh-02, not ht-dklb-01
```
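That x509 error means the certificate being presented does not carry the name being dialed (here, the LB host ht-dklb-01). A quick way to see which SANs a kubeadm-generated cert actually contains, assuming the default paths:

```
# Print the Subject Alternative Names baked into the etcd peer certificate
openssl x509 -in /etc/kubernetes/pki/etcd/peer.crt -noout -text \
  | grep -A1 "Subject Alternative Name"
```

If the name you reach the cluster through is not listed, the cert has to be regenerated with that SAN included.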
I am really stuck setting this up. Should I follow a mixture of the 1.10 and 1.11 documentation for this setup?
Hello @hextrim Did you first copy the contents of /etc/kubernetes/... from Master1 to Master3 before running any kubectl and kubeadm commands on Master3? I can't tell what enhancements were made in 1.11.3, but based on my experience with 1.11.2, it failed with an external HAProxy LB and worked with Keepalived set up on the master nodes themselves as per the 1.10 documentation.
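For reference, that copy step from the v1.11 guide boils down to something like the following (a sketch; the exact file list is in the guide, and root SSH between masters is assumed):

```
# Run on Master1; Master3's /etc/kubernetes/pki/etcd directory must already exist
USER=root
HOST=ht-dkh-03
scp /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.key \
    /etc/kubernetes/pki/sa.key /etc/kubernetes/pki/sa.pub \
    "${USER}@${HOST}:/etc/kubernetes/pki/"
scp /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/ca.key \
    "${USER}@${HOST}:/etc/kubernetes/pki/etcd/"
scp /etc/kubernetes/admin.conf "${USER}@${HOST}:/etc/kubernetes/"
```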
Hi @geomarsi I managed to set up "Stacked ETCD" behind HAProxy on CentOS 7.5.1804 without issues, using kubeadm, kubectl, and kubelet v1.11.3 and following the official documentation here: https://kubernetes.io/docs/setup/independent/high-availability/ .
The next step is to set up the cluster with "External ETCD".
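For the "External ETCD" variant, the v1alpha2 kubeadm config points the apiserver at existing etcd endpoints instead of running a local member. A sketch with placeholder IPs and the default kubeadm client-cert paths (field names as I understand the v1alpha2 API):

```
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.3
etcd:
  external:
    endpoints:
    - https://192.168.1.21:2379
    - https://192.168.1.22:2379
    - https://192.168.1.23:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```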
Closing per previous comment https://github.com/kubernetes/kubernetes/issues/67389#issuecomment-413300366
Following the Kubernetes v1.11 documentation, I have managed to set up Kubernetes high availability using kubeadm and stacked control-plane nodes, with 3 masters running on-premises on CentOS7 VMs. But with no load balancer available, I used Keepalived to set up a failover virtual IP (10.171.4.12) for the apiserver, as described in the Kubernetes v1.10 documentation. As a result, the "kubeadm-config.yaml" used to bootstrap the control planes had the following header:
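(The header itself did not survive the paste; a sketch of what a v1.11/v1alpha2 header with a Keepalived VIP as the control-plane endpoint typically looked like:)

```
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
apiServerCertSANs:
- "10.171.4.12"
api:
  controlPlaneEndpoint: "10.171.4.12:6443"
```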
The configuration went fine, aside from the following Warning that appeared when bootstrapping all 3 Masters:
And this Warning when joining Workers:
Afterwards, basic tests succeed:
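(The test output did not survive the paste; basic checks of this kind typically amount to:)

```
# All masters and workers should report Ready
kubectl get nodes

# Control-plane pods on each master should be Running
kubectl get pods -n kube-system -o wide

# The apiserver should answer through the Keepalived VIP
kubectl --server=https://10.171.4.12:6443 get componentstatuses
```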
But then came these issues:
I am running Kubernetes v1.11.1, but kubeadm-config.yaml mentions 1.11.0. Is this something I should worry about?
Should I abandon the official documentation and go for alternatives such as the one described at: https://medium.com/@bambash/ha-kubernetes-cluster-via-kubeadm-b2133360b198
Important Notes: After running a couple of labs, I got the same issue with:
-- Nginx controller pod events & logs:
-- helm command outputs:
-- kubernetes service & endpoints: