kubernetes-sigs / kubespray

Deploy a Production Ready Kubernetes Cluster
Apache License 2.0

Failed to generate certificates in k8s-master[1] node #4639

Closed · jiangytcn closed this issue 5 years ago

jiangytcn commented 5 years ago

Environment:

Linux 4.19.23-041923-generic x86_64

NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

Kubespray version (commit) (git rev-parse --short HEAD): 09fe95bc

Network plugin used: calico

Copy of your inventory file:

{ "os_flavor=4C8G50G": { "hosts": [ "management-k8s-master-1", "management-k8s-master-2", "management-k8s-node-3", "management-k8s-node-1", "management-k8s-node-2" ] }, "all": { "hosts": [ "management-k8s-master-1", "management-k8s-master-2", "management-k8s-node-3", "management-k8s-node-1", "management-k8s-node-2" ] }, "os_metadata_kubespray_groups=bastion": { "hosts": [ "management-bastion-1" ] }, "kube-master": { "hosts": [ "management-k8s-master-1", "management-k8s-master-2" ] }, "os_metadata_kubespray_groups=kube-node,k8s-cluster,": { "hosts": [ "management-k8s-node-3", "management-k8s-node-1", "management-k8s-node-2" ] }, "os_metadata_kubespray_groups=etcd,kube-master,,k8s-cluster,vault": { "hosts": [ "management-k8s-master-1", "management-k8s-master-2" ] }, "os_metadata_ssh_user=ubuntu": { "hosts": [ "management-k8s-master-1", "management-k8s-master-2", "management-etcd-3", "management-etcd-2", "management-etcd-1", "management-k8s-node-3", "management-k8s-node-1", "management-k8s-node-2", "management-bastion-1" ] }, "os_flavor=2C2G50G": { "hosts": [ "management-bastion-1" ] }, "publicly_routable": { "hosts": [ "management-k8s-master-1", "management-k8s-master-2", "management-etcd-3", "management-etcd-2", "management-etcd-1", "management-k8s-node-3", "management-k8s-node-1", "management-k8s-node-2", "management-bastion-1" ] }, "os_image=ubuntu-16.04": { "hosts": [ "management-k8s-master-1", "management-k8s-master-2", "management-etcd-3", "management-etcd-2", "management-etcd-1", "management-k8s-node-3", "management-k8s-node-1", "management-k8s-node-2", "management-bastion-1" ] }, "osmetadata%=3": { "hosts": [ "management-k8s-master-1", "management-k8s-master-2", "management-etcd-3", "management-etcd-2", "management-etcd-1", "management-k8s-node-3", "management-k8s-node-1", "management-k8s-node-2", "management-bastion-1" ] }, "k8s-cluster": { "hosts": [ "management-k8s-master-1", "management-k8s-master-2", "management-k8s-node-3", "management-k8s-node-1", "management-k8s-node-2" ] }, "bastion": { "hosts": [ "management-bastion-1" ] }, "os_metadata_kubespray_groups=etcd,vault,no-floating": { "hosts": [ "management-etcd-3", "management-etcd-2", "management-etcd-1" ] }, "etcd": { "hosts": [ "management-k8s-master-1", "management-k8s-master-2", "management-etcd-3", "management-etcd-2", "management-etcd-1" ] }, "kube-node": { "hosts": [ "management-k8s-node-3", "management-k8s-node-1", "management-k8s-node-2" ] }, "no-floating": { "hosts": [ "management-etcd-3", "management-etcd-2", "management-etcd-1" ] }, "os_region=RegionOne": { "hosts": [ "management-k8s-master-1", "management-k8s-master-2", "management-etcd-3", "management-etcd-2", "management-etcd-1", "management-k8s-node-3", "management-k8s-node-1", "management-k8s-node-2", "management-bastion-1" ] }, "vault": { "hosts": [ "management-k8s-master-1", "management-k8s-master-2", "management-etcd-3", "management-etcd-2", "management-etcd-1" ] }, "role=none": { "hosts": [ "management-k8s-master-1", "management-k8s-master-2", "management-etcd-3", "management-etcd-2", "management-etcd-1", "management-k8s-node-3", "management-k8s-node-1", "management-k8s-node-2", "management-bastion-1" ] }, "_meta": { "hostvars": { "management-k8s-master-1": { "host_domain": "novalocal", "role": "none", "access_ip_v6": "", "access_ip_v4": "192.168.2.127", "public_ipv4": "192.168.2.127", "ip": "10.0.0.15", "use_host_domain": true, "consul_dc": "compute", "flavor": { "name": "4C8G50G", "id": "01745f89-22b9-466b-aaff-ca4f03daf5d8" }, "ansible_ssh_port": 22, 
"id": "ddcc06ec-34e8-4326-9efc-bc129baaa21e", "security_groups": [ "management-k8s", "management-k8s-master" ], "publicly_routable": true, "access_ip": "192.168.2.127", "network": [ { "uuid": "c521680a-fc78-432e-9dfa-db5c1c18ba2b", "fixed_ip_v4": "10.0.0.15", "fixed_ip_v6": "", "floating_ip": "", "mac": "fa:16:3e:fd:0a:db", "access_network": "false", "port": "", "name": "k8s-garden-int-net" } ], "region": "RegionOne", "ansible_python_interpreter": "python", "ansible_ssh_host": "192.168.2.127", "ansible_ssh_user": "ubuntu", "key_pair": "kubernetes-management", "provider": "openstack", "private_ipv4": "10.0.0.15", "consul_is_server": false, "image": { "name": "ubuntu-16.04", "id": "cca27884-39e3-4c72-bf11-2b039d8580aa" }, "metadata": { "ssh_user": "ubuntu", "%": "3", "kubespray_groups": "etcd,kube-master,,k8s-cluster,vault", "depends_on": "8bf0805d-0f49-4b0c-b049-5d7fe5a2ecd2" } }, "management-etcd-3": { "host_domain": "novalocal", "role": "none", "access_ip_v6": "", "access_ip_v4": "10.0.0.14", "public_ipv4": "10.0.0.14", "ip": "10.0.0.14", "use_host_domain": true, "consul_dc": "compute", "flavor": { "name": "2C1G10G", "id": "e6caf3f4-7f52-4815-a58d-2d6723ade339" }, "ansible_ssh_port": 22, "id": "44ab11e4-de7d-4719-938c-8228bd893841", "security_groups": [ "management-k8s" ], "publicly_routable": true, "access_ip": "10.0.0.14", "network": [ { "uuid": "c521680a-fc78-432e-9dfa-db5c1c18ba2b", "fixed_ip_v4": "10.0.0.14", "fixed_ip_v6": "", "floating_ip": "", "mac": "fa:16:3e:92:ab:af", "access_network": "false", "port": "", "name": "k8s-garden-int-net" } ], "region": "RegionOne", "ansible_python_interpreter": "python", "ansible_ssh_host": "10.0.0.14", "ansible_ssh_user": "ubuntu", "key_pair": "kubernetes-management", "provider": "openstack", "private_ipv4": "10.0.0.14", "consul_is_server": false, "image": { "name": "ubuntu-16.04", "id": "cca27884-39e3-4c72-bf11-2b039d8580aa" }, "metadata": { "ssh_user": "ubuntu", "%": "3", "kubespray_groups": "etcd,vault,no-floating", "depends_on": "8bf0805d-0f49-4b0c-b049-5d7fe5a2ecd2" } }, "management-k8s-master-2": { "host_domain": "novalocal", "role": "none", "access_ip_v6": "", "access_ip_v4": "192.168.2.106", "public_ipv4": "192.168.2.106", "ip": "10.0.0.7", "use_host_domain": true, "consul_dc": "compute", "flavor": { "name": "4C8G50G", "id": "01745f89-22b9-466b-aaff-ca4f03daf5d8" }, "ansible_ssh_port": 22, "id": "41d7f77e-a511-4806-9ec7-a2e0d33666f4", "security_groups": [ "management-k8s", "management-k8s-master" ], "publicly_routable": true, "access_ip": "192.168.2.106", "network": [ { "uuid": "c521680a-fc78-432e-9dfa-db5c1c18ba2b", "fixed_ip_v4": "10.0.0.7", "fixed_ip_v6": "", "floating_ip": "", "mac": "fa:16:3e:5b:72:d0", "access_network": "false", "port": "", "name": "k8s-garden-int-net" } ], "region": "RegionOne", "ansible_python_interpreter": "python", "ansible_ssh_host": "192.168.2.106", "ansible_ssh_user": "ubuntu", "key_pair": "kubernetes-management", "provider": "openstack", "private_ipv4": "10.0.0.7", "consul_is_server": false, "image": { "name": "ubuntu-16.04", "id": "cca27884-39e3-4c72-bf11-2b039d8580aa" }, "metadata": { "ssh_user": "ubuntu", "%": "3", "kubespray_groups": "etcd,kube-master,,k8s-cluster,vault", "depends_on": "8bf0805d-0f49-4b0c-b049-5d7fe5a2ecd2" } }, "management-etcd-2": { "host_domain": "novalocal", "role": "none", "access_ip_v6": "", "access_ip_v4": "10.0.0.5", "public_ipv4": "10.0.0.5", "ip": "10.0.0.5", "use_host_domain": true, "consul_dc": "compute", "flavor": { "name": "2C1G10G", "id": 
"e6caf3f4-7f52-4815-a58d-2d6723ade339" }, "ansible_ssh_port": 22, "id": "4d43565b-b4c4-4a55-9c0f-43eb17fd080c", "security_groups": [ "management-k8s" ], "publicly_routable": true, "access_ip": "10.0.0.5", "network": [ { "uuid": "c521680a-fc78-432e-9dfa-db5c1c18ba2b", "fixed_ip_v4": "10.0.0.5", "fixed_ip_v6": "", "floating_ip": "", "mac": "fa:16:3e:5f:da:08", "access_network": "false", "port": "", "name": "k8s-garden-int-net" } ], "region": "RegionOne", "ansible_python_interpreter": "python", "ansible_ssh_host": "10.0.0.5", "ansible_ssh_user": "ubuntu", "key_pair": "kubernetes-management", "provider": "openstack", "private_ipv4": "10.0.0.5", "consul_is_server": false, "image": { "name": "ubuntu-16.04", "id": "cca27884-39e3-4c72-bf11-2b039d8580aa" }, "metadata": { "ssh_user": "ubuntu", "%": "3", "kubespray_groups": "etcd,vault,no-floating", "depends_on": "8bf0805d-0f49-4b0c-b049-5d7fe5a2ecd2" } }, "management-bastion-1": { "host_domain": "novalocal", "role": "none", "access_ip_v6": "", "access_ip_v4": "192.168.2.115", "public_ipv4": "192.168.2.115", "ip": "10.0.0.12", "use_host_domain": true, "consul_dc": "compute", "flavor": { "name": "2C2G50G", "id": "62174d2c-a445-4cac-a037-7f52ba8e5da4" }, "ansible_ssh_port": 22, "id": "eba7542b-6c5e-4879-b05f-1bd79bf6cf05", "security_groups": [ "management-k8s", "management-bastion" ], "publicly_routable": true, "access_ip": "192.168.2.115", "network": [ { "uuid": "c521680a-fc78-432e-9dfa-db5c1c18ba2b", "fixed_ip_v4": "10.0.0.12", "fixed_ip_v6": "", "floating_ip": "", "mac": "fa:16:3e:63:3d:9d", "access_network": "false", "port": "", "name": "k8s-garden-int-net" } ], "region": "RegionOne", "ansible_python_interpreter": "python", "ansible_ssh_host": "192.168.2.115", "ansible_ssh_user": "ubuntu", "key_pair": "kubernetes-management", "provider": "openstack", "private_ipv4": "10.0.0.12", "consul_is_server": false, "image": { "name": "ubuntu-16.04", "id": "cca27884-39e3-4c72-bf11-2b039d8580aa" }, "metadata": { "ssh_user": "ubuntu", "%": "3", "kubespray_groups": "bastion", "depends_on": "8bf0805d-0f49-4b0c-b049-5d7fe5a2ecd2" } }, "management-k8s-node-1": { "host_domain": "novalocal", "role": "none", "access_ip_v6": "", "access_ip_v4": "192.168.2.114", "public_ipv4": "192.168.2.114", "ip": "10.0.0.13", "use_host_domain": true, "consul_dc": "compute", "flavor": { "name": "4C8G50G", "id": "01745f89-22b9-466b-aaff-ca4f03daf5d8" }, "ansible_ssh_port": 22, "id": "510c1ffe-8a43-446c-a331-30b060a0e599", "security_groups": [ "management-k8s", "management-k8s-worker" ], "publicly_routable": true, "access_ip": "192.168.2.114", "network": [ { "uuid": "c521680a-fc78-432e-9dfa-db5c1c18ba2b", "fixed_ip_v4": "10.0.0.13", "fixed_ip_v6": "", "floating_ip": "", "mac": "fa:16:3e:8b:f1:82", "access_network": "false", "port": "", "name": "k8s-garden-int-net" } ], "region": "RegionOne", "ansible_python_interpreter": "python", "ansible_ssh_host": "192.168.2.114", "ansible_ssh_user": "ubuntu", "key_pair": "kubernetes-management", "provider": "openstack", "private_ipv4": "10.0.0.13", "consul_is_server": false, "image": { "name": "ubuntu-16.04", "id": "cca27884-39e3-4c72-bf11-2b039d8580aa" }, "metadata": { "ssh_user": "ubuntu", "%": "3", "kubespray_groups": "kube-node,k8s-cluster,", "depends_on": "8bf0805d-0f49-4b0c-b049-5d7fe5a2ecd2" } }, "management-k8s-node-3": { "host_domain": "novalocal", "role": "none", "access_ip_v6": "", "access_ip_v4": "192.168.2.105", "public_ipv4": "192.168.2.105", "ip": "10.0.0.3", "use_host_domain": true, "consul_dc": "compute", "flavor": { "name": 
"4C8G50G", "id": "01745f89-22b9-466b-aaff-ca4f03daf5d8" }, "ansible_ssh_port": 22, "id": "a8182cc6-a6ec-4d24-b07c-c1fdda4830c6", "security_groups": [ "management-k8s", "management-k8s-worker" ], "publicly_routable": true, "access_ip": "192.168.2.105", "network": [ { "uuid": "c521680a-fc78-432e-9dfa-db5c1c18ba2b", "fixed_ip_v4": "10.0.0.3", "fixed_ip_v6": "", "floating_ip": "", "mac": "fa:16:3e:87:26:32", "access_network": "false", "port": "", "name": "k8s-garden-int-net" } ], "region": "RegionOne", "ansible_python_interpreter": "python", "ansible_ssh_host": "192.168.2.105", "ansible_ssh_user": "ubuntu", "key_pair": "kubernetes-management", "provider": "openstack", "private_ipv4": "10.0.0.3", "consul_is_server": false, "image": { "name": "ubuntu-16.04", "id": "cca27884-39e3-4c72-bf11-2b039d8580aa" }, "metadata": { "ssh_user": "ubuntu", "%": "3", "kubespray_groups": "kube-node,k8s-cluster,", "depends_on": "8bf0805d-0f49-4b0c-b049-5d7fe5a2ecd2" } }, "management-etcd-1": { "host_domain": "novalocal", "role": "none", "access_ip_v6": "", "access_ip_v4": "10.0.0.4", "public_ipv4": "10.0.0.4", "ip": "10.0.0.4", "use_host_domain": true, "consul_dc": "compute", "flavor": { "name": "2C1G10G", "id": "e6caf3f4-7f52-4815-a58d-2d6723ade339" }, "ansible_ssh_port": 22, "id": "76bd3a7b-e5cb-40c7-b47b-3dd9773f4dce", "security_groups": [ "management-k8s" ], "publicly_routable": true, "access_ip": "10.0.0.4", "network": [ { "uuid": "c521680a-fc78-432e-9dfa-db5c1c18ba2b", "fixed_ip_v4": "10.0.0.4", "fixed_ip_v6": "", "floating_ip": "", "mac": "fa:16:3e:c3:2f:29", "access_network": "false", "port": "", "name": "k8s-garden-int-net" } ], "region": "RegionOne", "ansible_python_interpreter": "python", "ansible_ssh_host": "10.0.0.4", "ansible_ssh_user": "ubuntu", "key_pair": "kubernetes-management", "provider": "openstack", "private_ipv4": "10.0.0.4", "consul_is_server": false, "image": { "name": "ubuntu-16.04", "id": "cca27884-39e3-4c72-bf11-2b039d8580aa" }, "metadata": { "ssh_user": "ubuntu", "%": "3", "kubespray_groups": "etcd,vault,no-floating", "depends_on": "8bf0805d-0f49-4b0c-b049-5d7fe5a2ecd2" } }, "management-k8s-node-2": { "host_domain": "novalocal", "role": "none", "access_ip_v6": "", "access_ip_v4": "192.168.2.123", "public_ipv4": "192.168.2.123", "ip": "10.0.0.9", "use_host_domain": true, "consul_dc": "compute", "flavor": { "name": "4C8G50G", "id": "01745f89-22b9-466b-aaff-ca4f03daf5d8" }, "ansible_ssh_port": 22, "id": "7e825c72-b6a5-4859-9bed-065d8b91f3fd", "security_groups": [ "management-k8s", "management-k8s-worker" ], "publicly_routable": true, "access_ip": "192.168.2.123", "network": [ { "uuid": "c521680a-fc78-432e-9dfa-db5c1c18ba2b", "fixed_ip_v4": "10.0.0.9", "fixed_ip_v6": "", "floating_ip": "", "mac": "fa:16:3e:66:f5:e0", "access_network": "false", "port": "", "name": "k8s-garden-int-net" } ], "region": "RegionOne", "ansible_python_interpreter": "python", "ansible_ssh_host": "192.168.2.123", "ansible_ssh_user": "ubuntu", "key_pair": "kubernetes-management", "provider": "openstack", "private_ipv4": "10.0.0.9", "consul_is_server": false, "image": { "name": "ubuntu-16.04", "id": "cca27884-39e3-4c72-bf11-2b039d8580aa" }, "metadata": { "ssh_user": "ubuntu", "%": "3", "kubespray_groups": "kube-node,k8s-cluster,", "depends_on": "8bf0805d-0f49-4b0c-b049-5d7fe5a2ecd2" } } } }, "os_flavor=2C1G10G": { "hosts": [ "management-etcd-3", "management-etcd-2", "management-etcd-1" ] }, "os_metadata_depends_on=8bf0805d-0f49-4b0c-b049-5d7fe5a2ecd2": { "hosts": [ "management-k8s-master-1", "management-k8s-master-2", 
"management-etcd-3", "management-etcd-2", "management-etcd-1", "management-k8s-node-3", "management-k8s-node-1", "management-k8s-node-2", "management-bastion-1" ] }, "dc=compute": { "hosts": [ "management-k8s-master-1", "management-k8s-master-2", "management-etcd-3", "management-etcd-2", "management-etcd-1", "management-k8s-node-3", "management-k8s-node-1", "management-k8s-node-2", "management-bastion-1" ] } }

Command used to invoke ansible: ansible-playbook -i inventory/ystacks-lab/hosts cluster.yml -b

Output of ansible run:

Anything else do we need to know:

I followed the Terraform for OpenStack guide to bootstrap the cluster.

terraform.tfvars file:

cluster_name = "management"
network_name = "k8s-garden-int-net"
dns_nameservers = ["192.168.2.10", "8.8.8.8"]
floatingip_pool = "public"
external_net = "4ccc1131-299c-435c-920e-faf185ec7d18"
flavor_k8s_master="01745f89-22b9-466b-aaff-ca4f03daf5d8"
flavor_k8s_node="01745f89-22b9-466b-aaff-ca4f03daf5d8"
flavor_etcd="e6caf3f4-7f52-4815-a58d-2d6723ade339"
flavor_bastion="62174d2c-a445-4cac-a037-7f52ba8e5da4"
flavor_gfs_node="01745f89-22b9-466b-aaff-ca4f03daf5d8"
number_of_etcd=3
number_of_k8s_masters=2
number_of_k8s_masters_no_etcd=0
number_of_k8s_masters_no_floating_ip=0
number_of_k8s_masters_no_floating_ip_no_etcd=0
number_of_k8s_nodes=3
number_of_k8s_nodes_no_floating_ip=0
image="ubuntu-16.04"
image_gfs="ubuntu-16.04"
k8s_allowed_remote_ips=["0.0.0.0/0"]
k8s_allowed_egress_ips=["0.0.0.0/0"]

I also disabled the OpenStack cloud provider by commenting out the cloud_provider: section in the group_vars/all/all.yml file.

jiangytcn commented 5 years ago

Sorry, I cannot provide the log of the failed jobs.

The problem is this metadata setting: https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/terraform/openstack/modules/compute/main.tf#L134. In my environment, 3 dedicated etcd VMs were deployed, and after I deleted etcd from the masters' kubespray_groups metadata, I could deploy the kube cluster successfully.
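
For context, the metadata block on the master instances at that line looked roughly like the sketch below. This is reconstructed from the inventory output above, not copied from the file; the variable names (var.ssh_user, var.supplementary_master_groups, var.network_id) are assumptions based on the compute module around that commit.

resource "openstack_compute_instance_v2" "k8s_master" {
  # name, image, flavor, key_pair, network, etc. omitted

  metadata = {
    ssh_user = "${var.ssh_user}"
    # Masters are unconditionally placed in the etcd group here, which is why
    # the inventory above shows "etcd,kube-master,,k8s-cluster,vault" even
    # when dedicated etcd nodes exist. The double comma comes from an empty
    # supplementary_master_groups variable.
    kubespray_groups = "etcd,kube-master,${var.supplementary_master_groups},k8s-cluster,vault"
    depends_on       = "${var.network_id}"
  }
}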

holmsten commented 5 years ago

If you're running dedicated etcd nodes, shouldn't you use number_of_k8s_masters_no_etcd instead of number_of_k8s_masters? I'd like to see some Ansible output as well.

jiangytcn commented 5 years ago

Thanks @holmsten, let me try.

jiangytcn commented 5 years ago

Just remembered: I tried that before, and none of the masters got a floating IP associated, which led to the same failure.

https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/terraform/openstack/modules/compute/main.tf#L293-L297
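
Those lines associate floating IPs only with the masters counted by number_of_k8s_masters. A sketch of what that resource looked like (the names k8s_master_fips and wait_for_floatingip are assumptions from the module around that commit); note there was no equivalent association for the k8s_master_no_etcd instances:

resource "openstack_compute_floatingip_associate_v2" "k8s_master" {
  # One association per master created via number_of_k8s_masters; masters
  # created via number_of_k8s_masters_no_etcd are not covered by any
  # corresponding resource.
  count                 = "${var.number_of_k8s_masters}"
  floating_ip           = "${var.k8s_master_fips[count.index]}"
  instance_id           = "${element(openstack_compute_instance_v2.k8s_master.*.id, count.index)}"
  wait_until_associated = "${var.wait_for_floatingip}"
}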

jiangytcn commented 5 years ago

Trying now; I'll update with the result later.

Updated Terraform variables:

...
flavor_gfs_node="01745f89-22b9-466b-aaff-ca4f03daf5d8"
number_of_etcd=3
number_of_k8s_masters=0
number_of_k8s_masters_no_etcd=2
number_of_k8s_masters_no_floating_ip=0
number_of_k8s_masters_no_floating_ip_no_etcd=0
number_of_k8s_nodes=3
number_of_k8s_nodes_no_floating_ip=0
image="ubuntu-16.04"
image_gfs="ubuntu-16.04"
k8s_allowed_remote_ips=["0.0.0.0/0"]
k8s_allowed_egress_ips=["0.0.0.0/0"]

jiangytcn commented 5 years ago

@holmsten I ran the Ansible tasks from my local machine. If number_of_k8s_masters_no_etcd is set but number_of_k8s_masters is 0, the master nodes don't get a floating IP associated, and the Ansible tasks hang trying to reach them:

jiangyt+ 33472 33469  0 22:15 pts/41   00:00:00 ssh -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User="ubuntu" -o ConnectTimeout=300 -o ControlPath=/home/jiangytcn/.ansible/cp/36a5063669 -tt 10.0.0.19 sudo -H -S -n  -u root /bin/sh -c 'echo BECOME-SUCCESS-elvvxyscwwcijlrheiwjswiecodrlkev ; cat /etc/os-release'
jiangyt+ 33473 33470  0 22:15 pts/41   00:00:00 ssh -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User="ubuntu" -o ConnectTimeout=300 -o ControlPath=/home/jiangytcn/.ansible/cp/58b27c8045 -tt 10.0.0.8 sudo -H -S -n  -u root /bin/sh -c 'echo BECOME-SUCCESS-cdtzvmvfcyqltdjkotdfishppedoponf ; cat /etc/os-release'

+--------------------------------------+----------------------------+--------+---------------------------------------------+--------------+---------+
| ID                                   | Name                       | Status | Networks                                    | Image        | Flavor  |
+--------------------------------------+----------------------------+--------+---------------------------------------------+--------------+---------+
| 8018681f-46a2-4b5d-a9a2-62790b129e8b | management-k8s-node-2      | ACTIVE | k8s-garden-int-net=10.0.0.16, 192.168.2.115 | ubuntu-16.04 | 4C8G50G |
| b470528d-6fcd-4b6c-a0a2-4cf2fad35aa7 | management-k8s-master-ne-2 | ACTIVE | k8s-garden-int-net=10.0.0.19                | ubuntu-16.04 | 4C8G50G |
| a26d093d-41e2-4d3f-b8c2-956dbfab838d | management-k8s-node-3      | ACTIVE | k8s-garden-int-net=10.0.0.18, 192.168.2.114 | ubuntu-16.04 | 4C8G50G |
| c0e96bfb-6136-4ebe-87f6-8244083a0129 | management-k8s-node-1      | ACTIVE | k8s-garden-int-net=10.0.0.9, 192.168.2.106  | ubuntu-16.04 | 4C8G50G |
| 43a613e8-95fb-42c5-a0e7-78d9f22e7e0f | management-k8s-master-ne-1 | ACTIVE | k8s-garden-int-net=10.0.0.8                 | ubuntu-16.04 | 4C8G50G |
| 9e730cb0-508a-4709-8b5e-9426c9605a83 | management-bastion-1       | ACTIVE | k8s-garden-int-net=10.0.0.4, 192.168.2.116  | ubuntu-16.04 | 2C2G50G |
| 015c8a4f-1b4e-473b-b12a-ad7666bd2207 | management-etcd-3          | ACTIVE | k8s-garden-int-net=10.0.0.5                 | ubuntu-16.04 | 2C1G10G |
| 033d9859-4206-46a2-91f3-10fe47c6c3dc | management-etcd-2          | ACTIVE | k8s-garden-int-net=10.0.0.17                | ubuntu-16.04 | 2C1G10G |
| 329beec0-ba15-4fa7-8f01-f3f2099b340c | management-etcd-1          | ACTIVE | k8s-garden-int-net=10.0.0.11                | ubuntu-16.04 | 2C1G10G |
+--------------------------------------+----------------------------+--------+---------------------------------------------+--------------+---------+

$ cat group_vars/no-floating.yml
ansible_ssh_common_args: "-o ProxyCommand='ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p -q ubuntu@192.168.2.116 {% if ansible_ssh_private_key_file is defined %}-i {{ ansible_ssh_private_key_file }}{% endif %}'"

jiangytcn commented 5 years ago

@holmsten you're right. After using number_of_k8s_masters_no_etcd to bootstrap the master nodes, the playbook appears to be running; it hasn't finished yet. FYI, I needed to enable floating IPs for these masters without etcd. PR created, please help review: https://github.com/kubernetes-sigs/kubespray/pull/4657
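
Presumably the fix adds a floating-IP association covering the masters created without etcd, along these lines (a hypothetical sketch; the resource and variable names here are guesses, see the PR for the actual change):

resource "openstack_compute_floatingip_associate_v2" "k8s_masters_no_etcd" {
  # Associate floating IPs with the masters created via
  # number_of_k8s_masters_no_etcd, which the existing k8s_master
  # association resource does not touch.
  count                 = "${var.number_of_k8s_masters_no_etcd}"
  floating_ip           = "${var.k8s_masters_no_etcd_fips[count.index]}"
  instance_id           = "${element(openstack_compute_instance_v2.k8s_master_no_etcd.*.id, count.index)}"
  wait_until_associated = "${var.wait_for_floatingip}"
}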