kubernetes-sigs / kubespray

Deploy a Production Ready Kubernetes Cluster
Apache License 2.0

`kubeadm init` fails on master: node "..." not found (OpenStack) #4529

Closed: fhemberger closed this issue 4 years ago

fhemberger commented 5 years ago

Environment:

Kubespray version (commit) (`git rev-parse --short HEAD`): v2.9.0

Network plugin used:

Copy of your inventory file: https://gist.github.com/fhemberger/15de65d6ba3e1322616f974d7e145917#file-hosts-json (generated from Terraform)

Command used to invoke ansible (from the inventory directory): `ansible-playbook -i hosts --become ../../kubespray/cluster.yml`

Output of ansible run: https://gist.github.com/fhemberger/15de65d6ba3e1322616f974d7e145917

Anything else we need to know:

kubelet logs:

Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --allow-privileged has been deprecated, will be removed in a future version
Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --rotate-certificates has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --node-status-update-frequency has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --max-pods has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --anonymous-auth has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --read-only-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --kubelet-cgroups has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --kube-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
I0415 09:05:48.087147   32738 flags.go:33] FLAG: --address="192.168.0.7"
I0415 09:05:48.087442   32738 flags.go:33] FLAG: --allow-privileged="true"
I0415 09:05:48.087719   32738 flags.go:33] FLAG: --allowed-unsafe-sysctls="[]"
I0415 09:05:48.087995   32738 flags.go:33] FLAG: --alsologtostderr="false"
I0415 09:05:48.088217   32738 flags.go:33] FLAG: --anonymous-auth="false"
I0415 09:05:48.088432   32738 flags.go:33] FLAG: --application-metrics-count-limit="100"
I0415 09:05:48.088657   32738 flags.go:33] FLAG: --authentication-token-webhook="true"
I0415 09:05:48.088875   32738 flags.go:33] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
I0415 09:05:48.089098   32738 flags.go:33] FLAG: --authorization-mode="AlwaysAllow"
I0415 09:05:48.089338   32738 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
I0415 09:05:48.089550   32738 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
I0415 09:05:48.089757   32738 flags.go:33] FLAG: --azure-container-registry-config=""
I0415 09:05:48.089958   32738 flags.go:33] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
I0415 09:05:48.090173   32738 flags.go:33] FLAG: --bootstrap-checkpoint-path=""
I0415 09:05:48.090385   32738 flags.go:33] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
I0415 09:05:48.090606   32738 flags.go:33] FLAG: --cert-dir="/var/lib/kubelet/pki"
I0415 09:05:48.090835   32738 flags.go:33] FLAG: --cgroup-driver="cgroupfs"
I0415 09:05:48.091042   32738 flags.go:33] FLAG: --cgroup-root=""
I0415 09:05:48.091246   32738 flags.go:33] FLAG: --cgroups-per-qos="true"
I0415 09:05:48.091451   32738 flags.go:33] FLAG: --chaos-chance="0"
I0415 09:05:48.091715   32738 flags.go:33] FLAG: --client-ca-file="/etc/kubernetes/ssl/ca.crt"
I0415 09:05:48.091932   32738 flags.go:33] FLAG: --cloud-config="/etc/kubernetes/cloud_config"
I0415 09:05:48.092165   32738 flags.go:33] FLAG: --cloud-provider="openstack"
I0415 09:05:48.092374   32738 flags.go:33] FLAG: --cluster-dns="[10.233.0.3]"
I0415 09:05:48.092591   32738 flags.go:33] FLAG: --cluster-domain="cluster.local"
I0415 09:05:48.092792   32738 flags.go:33] FLAG: --cni-bin-dir="/opt/cni/bin"
I0415 09:05:48.092998   32738 flags.go:33] FLAG: --cni-conf-dir="/etc/cni/net.d"
I0415 09:05:48.093016   32738 flags.go:33] FLAG: --config=""
I0415 09:05:48.093024   32738 flags.go:33] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
I0415 09:05:48.093035   32738 flags.go:33] FLAG: --container-log-max-files="5"
I0415 09:05:48.093048   32738 flags.go:33] FLAG: --container-log-max-size="10Mi"
I0415 09:05:48.093056   32738 flags.go:33] FLAG: --container-runtime="docker"
I0415 09:05:48.093065   32738 flags.go:33] FLAG: --container-runtime-endpoint="unix:///var/run/dockershim.sock"
I0415 09:05:48.093074   32738 flags.go:33] FLAG: --containerd="unix:///var/run/containerd.sock"
I0415 09:05:48.093083   32738 flags.go:33] FLAG: --containerized="false"
I0415 09:05:48.093092   32738 flags.go:33] FLAG: --contention-profiling="false"
I0415 09:05:48.093101   32738 flags.go:33] FLAG: --cpu-cfs-quota="true"
I0415 09:05:48.093109   32738 flags.go:33] FLAG: --cpu-cfs-quota-period="100ms"
I0415 09:05:48.093118   32738 flags.go:33] FLAG: --cpu-manager-policy="none"
I0415 09:05:48.093126   32738 flags.go:33] FLAG: --cpu-manager-reconcile-period="10s"
I0415 09:05:48.093135   32738 flags.go:33] FLAG: --docker="unix:///var/run/docker.sock"
I0415 09:05:48.093145   32738 flags.go:33] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
I0415 09:05:48.093153   32738 flags.go:33] FLAG: --docker-env-metadata-whitelist=""
I0415 09:05:48.093168   32738 flags.go:33] FLAG: --docker-only="false"
I0415 09:05:48.093177   32738 flags.go:33] FLAG: --docker-root="/var/lib/docker"
I0415 09:05:48.093186   32738 flags.go:33] FLAG: --docker-tls="false"
I0415 09:05:48.093194   32738 flags.go:33] FLAG: --docker-tls-ca="ca.pem"
I0415 09:05:48.093202   32738 flags.go:33] FLAG: --docker-tls-cert="cert.pem"
I0415 09:05:48.093211   32738 flags.go:33] FLAG: --docker-tls-key="key.pem"
I0415 09:05:48.093219   32738 flags.go:33] FLAG: --dynamic-config-dir=""
I0415 09:05:48.093230   32738 flags.go:33] FLAG: --enable-controller-attach-detach="true"
I0415 09:05:48.093239   32738 flags.go:33] FLAG: --enable-debugging-handlers="true"
I0415 09:05:48.093247   32738 flags.go:33] FLAG: --enable-load-reader="false"
I0415 09:05:48.093255   32738 flags.go:33] FLAG: --enable-server="true"
I0415 09:05:48.093263   32738 flags.go:33] FLAG: --enforce-node-allocatable="[]"
I0415 09:05:48.093281   32738 flags.go:33] FLAG: --event-burst="10"
I0415 09:05:48.093290   32738 flags.go:33] FLAG: --event-qps="5"
I0415 09:05:48.093298   32738 flags.go:33] FLAG: --event-storage-age-limit="default=0"
I0415 09:05:48.093307   32738 flags.go:33] FLAG: --event-storage-event-limit="default=0"
I0415 09:05:48.093319   32738 flags.go:33] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
I0415 09:05:48.093348   32738 flags.go:33] FLAG: --eviction-max-pod-grace-period="0"
I0415 09:05:48.093357   32738 flags.go:33] FLAG: --eviction-minimum-reclaim=""
I0415 09:05:48.093369   32738 flags.go:33] FLAG: --eviction-pressure-transition-period="5m0s"
I0415 09:05:48.093378   32738 flags.go:33] FLAG: --eviction-soft=""
I0415 09:05:48.093387   32738 flags.go:33] FLAG: --eviction-soft-grace-period=""
I0415 09:05:48.093395   32738 flags.go:33] FLAG: --exit-on-lock-contention="false"
I0415 09:05:48.093403   32738 flags.go:33] FLAG: --experimental-allocatable-ignore-eviction="false"
I0415 09:05:48.093411   32738 flags.go:33] FLAG: --experimental-bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
I0415 09:05:48.093422   32738 flags.go:33] FLAG: --experimental-check-node-capabilities-before-mount="false"
I0415 09:05:48.093430   32738 flags.go:33] FLAG: --experimental-dockershim="false"
I0415 09:05:48.093438   32738 flags.go:33] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
I0415 09:05:48.093447   32738 flags.go:33] FLAG: --experimental-fail-swap-on="true"
I0415 09:05:48.093455   32738 flags.go:33] FLAG: --experimental-kernel-memcg-notification="false"
I0415 09:05:48.093463   32738 flags.go:33] FLAG: --experimental-mounter-path=""
I0415 09:05:48.093471   32738 flags.go:33] FLAG: --fail-swap-on="true"
I0415 09:05:48.093483   32738 flags.go:33] FLAG: --feature-gates=""
I0415 09:05:48.093495   32738 flags.go:33] FLAG: --file-check-frequency="20s"
I0415 09:05:48.093503   32738 flags.go:33] FLAG: --global-housekeeping-interval="1m0s"
I0415 09:05:48.093512   32738 flags.go:33] FLAG: --hairpin-mode="promiscuous-bridge"
I0415 09:05:48.093521   32738 flags.go:33] FLAG: --healthz-bind-address="127.0.0.1"
I0415 09:05:48.093531   32738 flags.go:33] FLAG: --healthz-port="10248"
I0415 09:05:48.093539   32738 flags.go:33] FLAG: --help="false"
I0415 09:05:48.093548   32738 flags.go:33] FLAG: --host-ipc-sources="[*]"
I0415 09:05:48.093559   32738 flags.go:33] FLAG: --host-network-sources="[*]"
I0415 09:05:48.093572   32738 flags.go:33] FLAG: --host-pid-sources="[*]"
I0415 09:05:48.093581   32738 flags.go:33] FLAG: --hostname-override="dev-de-cloud-k8s-master-1"
I0415 09:05:48.093590   32738 flags.go:33] FLAG: --housekeeping-interval="10s"
I0415 09:05:48.093599   32738 flags.go:33] FLAG: --http-check-frequency="20s"
I0415 09:05:48.093607   32738 flags.go:33] FLAG: --image-gc-high-threshold="85"
I0415 09:05:48.093615   32738 flags.go:33] FLAG: --image-gc-low-threshold="80"
I0415 09:05:48.093623   32738 flags.go:33] FLAG: --image-pull-progress-deadline="1m0s"
I0415 09:05:48.093636   32738 flags.go:33] FLAG: --image-service-endpoint=""
I0415 09:05:48.093644   32738 flags.go:33] FLAG: --iptables-drop-bit="15"
I0415 09:05:48.093652   32738 flags.go:33] FLAG: --iptables-masquerade-bit="14"
I0415 09:05:48.093660   32738 flags.go:33] FLAG: --keep-terminated-pod-volumes="false"
I0415 09:05:48.093668   32738 flags.go:33] FLAG: --kube-api-burst="10"
I0415 09:05:48.093677   32738 flags.go:33] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0415 09:05:48.093686   32738 flags.go:33] FLAG: --kube-api-qps="5"
I0415 09:05:48.093694   32738 flags.go:33] FLAG: --kube-reserved="cpu=200m,memory=512M"
I0415 09:05:48.093709   32738 flags.go:33] FLAG: --kube-reserved-cgroup=""
I0415 09:05:48.093716   32738 flags.go:33] FLAG: --kubeconfig="/etc/kubernetes/kubelet.conf"
I0415 09:05:48.093725   32738 flags.go:33] FLAG: --kubelet-cgroups="/systemd/system.slice"
I0415 09:05:48.093757   32738 flags.go:33] FLAG: --lock-file=""
I0415 09:05:48.093765   32738 flags.go:33] FLAG: --log-backtrace-at=":0"
I0415 09:05:48.093776   32738 flags.go:33] FLAG: --log-cadvisor-usage="false"
I0415 09:05:48.093784   32738 flags.go:33] FLAG: --log-dir=""
I0415 09:05:48.093792   32738 flags.go:33] FLAG: --log-file=""
I0415 09:05:48.093803   32738 flags.go:33] FLAG: --log-flush-frequency="5s"
I0415 09:05:48.093812   32738 flags.go:33] FLAG: --logtostderr="true"
I0415 09:05:48.093820   32738 flags.go:33] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
I0415 09:05:48.093830   32738 flags.go:33] FLAG: --make-iptables-util-chains="true"
I0415 09:05:48.093838   32738 flags.go:33] FLAG: --manifest-url=""
I0415 09:05:48.093846   32738 flags.go:33] FLAG: --manifest-url-header=""
I0415 09:05:48.093860   32738 flags.go:33] FLAG: --master-service-namespace="default"
I0415 09:05:48.093869   32738 flags.go:33] FLAG: --max-open-files="1000000"
I0415 09:05:48.093880   32738 flags.go:33] FLAG: --max-pods="110"
I0415 09:05:48.093889   32738 flags.go:33] FLAG: --maximum-dead-containers="-1"
I0415 09:05:48.093898   32738 flags.go:33] FLAG: --maximum-dead-containers-per-container="1"
I0415 09:05:48.093906   32738 flags.go:33] FLAG: --minimum-container-ttl-duration="0s"
I0415 09:05:48.093914   32738 flags.go:33] FLAG: --minimum-image-ttl-duration="2m0s"
I0415 09:05:48.093922   32738 flags.go:33] FLAG: --network-plugin="cni"
I0415 09:05:48.093930   32738 flags.go:33] FLAG: --network-plugin-mtu="0"
I0415 09:05:48.093938   32738 flags.go:33] FLAG: --node-ip="192.168.0.7"
I0415 09:05:48.093950   32738 flags.go:33] FLAG: --node-labels="node-role.kubernetes.io/master="
I0415 09:05:48.093965   32738 flags.go:33] FLAG: --node-status-max-images="50"
I0415 09:05:48.093973   32738 flags.go:33] FLAG: --node-status-update-frequency="10s"
I0415 09:05:48.093982   32738 flags.go:33] FLAG: --non-masquerade-cidr="10.0.0.0/8"
I0415 09:05:48.093990   32738 flags.go:33] FLAG: --oom-score-adj="-999"
I0415 09:05:48.093999   32738 flags.go:33] FLAG: --pod-cidr=""
I0415 09:05:48.094007   32738 flags.go:33] FLAG: --pod-infra-container-image="gcr.io/google_containers/pause-amd64:3.1"
I0415 09:05:48.094016   32738 flags.go:33] FLAG: --pod-manifest-path="/etc/kubernetes/manifests"
I0415 09:05:48.094025   32738 flags.go:33] FLAG: --pod-max-pids="-1"
I0415 09:05:48.094033   32738 flags.go:33] FLAG: --pods-per-core="0"
I0415 09:05:48.094041   32738 flags.go:33] FLAG: --port="10250"
I0415 09:05:48.094050   32738 flags.go:33] FLAG: --protect-kernel-defaults="false"
I0415 09:05:48.094058   32738 flags.go:33] FLAG: --provider-id=""
I0415 09:05:48.094066   32738 flags.go:33] FLAG: --qos-reserved=""
I0415 09:05:48.094075   32738 flags.go:33] FLAG: --read-only-port="0"
I0415 09:05:48.094083   32738 flags.go:33] FLAG: --really-crash-for-testing="false"
I0415 09:05:48.094095   32738 flags.go:33] FLAG: --redirect-container-streaming="false"
I0415 09:05:48.094103   32738 flags.go:33] FLAG: --register-node="true"
I0415 09:05:48.094111   32738 flags.go:33] FLAG: --register-schedulable="true"
I0415 09:05:48.094119   32738 flags.go:33] FLAG: --register-with-taints=""
I0415 09:05:48.094129   32738 flags.go:33] FLAG: --registry-burst="10"
I0415 09:05:48.094138   32738 flags.go:33] FLAG: --registry-qps="5"
I0415 09:05:48.094145   32738 flags.go:33] FLAG: --resolv-conf="/run/systemd/resolve/resolv.conf"
I0415 09:05:48.094155   32738 flags.go:33] FLAG: --root-dir="/var/lib/kubelet"
I0415 09:05:48.094163   32738 flags.go:33] FLAG: --rotate-certificates="true"
I0415 09:05:48.094171   32738 flags.go:33] FLAG: --rotate-server-certificates="false"
I0415 09:05:48.094179   32738 flags.go:33] FLAG: --runonce="false"
I0415 09:05:48.094187   32738 flags.go:33] FLAG: --runtime-cgroups="/systemd/system.slice"
I0415 09:05:48.094196   32738 flags.go:33] FLAG: --runtime-request-timeout="2m0s"
I0415 09:05:48.094204   32738 flags.go:33] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
I0415 09:05:48.094213   32738 flags.go:33] FLAG: --serialize-image-pulls="true"
I0415 09:05:48.094221   32738 flags.go:33] FLAG: --stderrthreshold="2"
I0415 09:05:48.094233   32738 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
I0415 09:05:48.094241   32738 flags.go:33] FLAG: --storage-driver-db="cadvisor"
I0415 09:05:48.094250   32738 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
I0415 09:05:48.094258   32738 flags.go:33] FLAG: --storage-driver-password="root"
I0415 09:05:48.094266   32738 flags.go:33] FLAG: --storage-driver-secure="false"
I0415 09:05:48.094274   32738 flags.go:33] FLAG: --storage-driver-table="stats"
I0415 09:05:48.094282   32738 flags.go:33] FLAG: --storage-driver-user="root"
I0415 09:05:48.094290   32738 flags.go:33] FLAG: --streaming-connection-idle-timeout="4h0m0s"
I0415 09:05:48.094299   32738 flags.go:33] FLAG: --sync-frequency="1m0s"
I0415 09:05:48.094307   32738 flags.go:33] FLAG: --system-cgroups=""
I0415 09:05:48.094315   32738 flags.go:33] FLAG: --system-reserved=""
I0415 09:05:48.094323   32738 flags.go:33] FLAG: --system-reserved-cgroup=""
I0415 09:05:48.094331   32738 flags.go:33] FLAG: --tls-cert-file=""
I0415 09:05:48.094339   32738 flags.go:33] FLAG: --tls-cipher-suites="[]"
I0415 09:05:48.094351   32738 flags.go:33] FLAG: --tls-min-version=""
I0415 09:05:48.094359   32738 flags.go:33] FLAG: --tls-private-key-file=""
I0415 09:05:48.094370   32738 flags.go:33] FLAG: --v="2"
I0415 09:05:48.094378   32738 flags.go:33] FLAG: --version="false"
I0415 09:05:48.094391   32738 flags.go:33] FLAG: --vmodule=""
I0415 09:05:48.094400   32738 flags.go:33] FLAG: --volume-plugin-dir="/var/lib/kubelet/volume-plugins"
I0415 09:05:48.094409   32738 flags.go:33] FLAG: --volume-stats-agg-period="1m0s"
I0415 09:05:48.094471   32738 feature_gate.go:206] feature gates: &{map[]}
W0415 09:05:48.094498   32738 options.go:265] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/master]
W0415 09:05:48.094520   32738 options.go:266] in 1.15, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
I0415 09:05:48.094613   32738 feature_gate.go:206] feature gates: &{map[]}
I0415 09:05:48.114708   32738 mount_linux.go:180] Detected OS with systemd
I0415 09:05:48.114756   32738 server.go:407] Version: v1.13.5
I0415 09:05:48.114850   32738 feature_gate.go:206] feature gates: &{map[]}
I0415 09:05:48.114961   32738 feature_gate.go:206] feature gates: &{map[]}
W0415 09:05:48.115017   32738 options.go:265] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/master]
W0415 09:05:48.115081   32738 options.go:266] in 1.15, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
W0415 09:05:48.115266   32738 plugins.go:118] WARNING: openstack built-in cloud provider is now deprecated. Please use 'external' cloud provider for openstack: https://github.com/kubernetes/cloud-provider-openstack
I0415 09:05:48.572429   32738 server.go:525] Successfully initialized cloud provider: "openstack" from the config file: "/etc/kubernetes/cloud_config"
I0415 09:05:48.680778   32738 server.go:791] cloud provider determined current node name to be dev-de-cloud-k8s-master-1
I0415 09:05:48.685303   32738 bootstrap.go:61] Kubeconfig /etc/kubernetes/kubelet.conf exists and is valid, skipping bootstrap
I0415 09:05:48.686721   32738 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
I0415 09:05:48.688012   32738 server.go:584] Starting client certificate rotation.
I0415 09:05:48.688041   32738 certificate_manager.go:234] Certificate rotation is enabled.
I0415 09:05:48.688479   32738 certificate_manager.go:471] Certificate expiration is 2020-04-14 08:54:28 +0000 UTC, rotation deadline is 2020-02-12 06:09:17.588175263 +0000 UTC
I0415 09:05:48.688697   32738 certificate_manager.go:240] Waiting 7269h3m28.899486569s for next certificate rotation
I0415 09:05:48.689455   32738 manager.go:155] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
I0415 09:05:48.721181   32738 fs.go:142] Filesystem UUIDs: map[91bfde43-9e70-45e9-b215-ab15bfdf4c92:/dev/sda1 C40D-6A21:/dev/sda15]
I0415 09:05:48.721217   32738 fs.go:143] Filesystem partitions: map[tmpfs:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /dev/sda1:{mountpoint:/ major:8 minor:1 fsType:ext4 blockSize:0} shm:{mountpoint:/var/lib/docker/containers/5df020f1f3a7901b2df0ca93c833ee99bb5fe4f5fbae02fb9d6c90390dedaa5b/mounts/shm major:0 minor:59 fsType:tmpfs blockSize:0}]
I0415 09:05:48.724078   32738 manager.go:229] Machine: {NumCores:2 CpuFrequency:2099998 MemoryCapacity:8364285952 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:aa284e35169048ccafff8182dcd546d2 SystemUUID:AA284E35-1690-48CC-AFFF-8182DCD546D2 BootID:d0950fbb-02bc-40a4-a8b8-f0c15598de63 Filesystems:[{Device:shm DeviceMajor:0 DeviceMinor:59 Capacity:67108864 Type:vfs Inodes:1021031 HasInodes:true} {Device:tmpfs DeviceMajor:0 DeviceMinor:24 Capacity:836431872 Type:vfs Inodes:1021031 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:51848359936 Type:vfs Inodes:6451200 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:53687091200 Scheduler:cfq}] NetworkDevices:[{Name:ens3 MacAddress:fa:16:3e:69:0d:dc Speed:-1 Mtu:1450}] Topology:[{Id:0 Memory:8364285952 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[{Size:16777216 Type:Unified Level:3}]} {Id:1 Memory:0 Cores:[{Id:0 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[{Size:16777216 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I0415 09:05:48.726601   32738 manager.go:235] Version: {KernelVersion:4.15.0-23-generic ContainerOsVersion:Ubuntu 18.04 LTS DockerVersion:18.06.2-ce DockerAPIVersion:1.38 CadvisorVersion: CadvisorRevision:}
I0415 09:05:48.726860   32738 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0415 09:05:48.727433   32738 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: []
I0415 09:05:48.727456   32738 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/system.slice SystemCgroupsName: KubeletCgroupsName:/systemd/system.slice ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[] KubeReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:512 scale:6} d:{Dec:<nil>} s:512M Format:DecimalSI}] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
I0415 09:05:48.727786   32738 container_manager_linux.go:272] Creating device plugin manager: true
I0415 09:05:48.727797   32738 manager.go:109] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock
I0415 09:05:48.728024   32738 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0415 09:05:48.728252   32738 state_mem.go:84] [cpumanager] updated default cpuset: ""
I0415 09:05:48.728291   32738 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
I0415 09:05:48.728399   32738 state_checkpoint.go:100] [cpumanager] state checkpoint: restored state from checkpoint
I0415 09:05:48.728418   32738 state_checkpoint.go:101] [cpumanager] state checkpoint: defaultCPUSet:
I0415 09:05:48.728616   32738 server.go:791] cloud provider determined current node name to be dev-de-cloud-k8s-master-1
I0415 09:05:48.728642   32738 server.go:941] Using root directory: /var/lib/kubelet
I0415 09:05:48.728784   32738 kubelet.go:397] cloud provider determined current node name to be dev-de-cloud-k8s-master-1
I0415 09:05:48.728885   32738 kubelet.go:281] Adding pod path: /etc/kubernetes/manifests
I0415 09:05:48.728996   32738 file.go:68] Watching path "/etc/kubernetes/manifests"
I0415 09:05:48.729026   32738 kubelet.go:306] Watching apiserver
E0415 09:05:48.740906   32738 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.0.7:6443/api/v1/pods?fieldSelector=spec.nodeName%3Ddev-de-cloud-k8s-master-1&limit=500&resourceVersion=0: dial tcp 192.168.0.7:6443: connect: connection refused
E0415 09:05:48.741125   32738 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://192.168.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Ddev-de-cloud-k8s-master-1&limit=500&resourceVersion=0: dial tcp 192.168.0.7:6443: connect: connection refused
E0415 09:05:48.741626   32738 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.0.7:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.0.7:6443: connect: connection refused
I0415 09:05:48.743101   32738 client.go:75] Connecting to docker on unix:///var/run/docker.sock
I0415 09:05:48.743294   32738 client.go:104] Start docker client with request timeout=2m0s
W0415 09:05:48.752689   32738 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0415 09:05:48.752865   32738 docker_service.go:236] Hairpin mode set to "hairpin-veth"
W0415 09:05:48.753080   32738 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
W0415 09:05:48.756415   32738 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
W0415 09:05:48.756613   32738 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
I0415 09:05:48.756723   32738 plugins.go:159] Loaded network plugin "cni"
I0415 09:05:48.756819   32738 docker_service.go:251] Docker cri networking managed by cni
I0415 09:05:48.766800   32738 docker_service.go:256] Docker Info: &{ID:MRUI:PZKM:LLNZ:JINY:6MM2:LYNJ:RLDN:PYCZ:AG3H:A6G2:2UQL:FXJF Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:59 SystemTime:2019-04-15T09:05:48.758241929Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-23-generic OperatingSystem:Ubuntu 18.04 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002700e0 NCPU:2 MemTotal:8364285952 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:dev-de-cloud-k8s-master-1 Labels:[] ExperimentalBuild:false ServerVersion:18.06.2-ce ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:468a545b9edcd5932818eb9de8e72413e616e86e Expected:468a545b9edcd5932818eb9de8e72413e616e86e} RuncCommit:{ID:a592beb5bc4c4092b1b1bac971afed27687340c5 Expected:69663f0bd4b60df09991c08812a60108003fa340} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default]}
I0415 09:05:48.766955   32738 docker_service.go:269] Setting cgroupDriver to cgroupfs
I0415 09:05:48.767113   32738 kubelet.go:636] Starting the GRPC server for the docker CRI shim.
I0415 09:05:48.767304   32738 container_manager_linux.go:115] Configure resource-only container "/systemd/system.slice" with memory limit: 5855000166
I0415 09:05:48.767322   32738 docker_server.go:59] Start dockershim grpc server
I0415 09:05:48.784778   32738 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.06.2-ce, apiVersion: 1.38.0
I0415 09:05:48.785180   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/aws-ebs"
I0415 09:05:48.785206   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/empty-dir"
I0415 09:05:48.785214   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/gce-pd"
I0415 09:05:48.785222   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/git-repo"
I0415 09:05:48.785230   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/host-path"
I0415 09:05:48.785237   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/nfs"
I0415 09:05:48.785246   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/secret"
I0415 09:05:48.785254   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/iscsi"
I0415 09:05:48.785264   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/glusterfs"
I0415 09:05:48.785366   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/rbd"
I0415 09:05:48.785514   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/cinder"
I0415 09:05:48.785699   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/quobyte"
I0415 09:05:48.785753   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/cephfs"
I0415 09:05:48.785857   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/downward-api"
I0415 09:05:48.785867   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/fc"
I0415 09:05:48.785876   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/flocker"
I0415 09:05:48.785961   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/azure-file"
I0415 09:05:48.785973   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/configmap"
I0415 09:05:48.786170   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/vsphere-volume"
I0415 09:05:48.786188   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/azure-disk"
I0415 09:05:48.786197   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/photon-pd"
I0415 09:05:48.786206   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/projected"
I0415 09:05:48.786388   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/portworx-volume"
I0415 09:05:48.786407   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/scaleio"
I0415 09:05:48.786417   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/local-volume"
I0415 09:05:48.786540   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/storageos"
I0415 09:05:48.786572   32738 plugins.go:547] Loaded volume plugin "kubernetes.io/csi"
I0415 09:05:48.790268   32738 server.go:999] Started kubelet
E0415 09:05:48.790578   32738 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
I0415 09:05:48.790877   32738 server.go:137] Starting to listen on 192.168.0.7:10250
E0415 09:05:48.791744   32738 event.go:212] Unable to write event: 'Post https://192.168.0.7:6443/api/v1/namespaces/default/events: dial tcp 192.168.0.7:6443: connect: connection refused' (may retry after sleeping)
I0415 09:05:48.792248   32738 server.go:333] Adding debug handlers to kubelet server.
I0415 09:05:48.792430   32738 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0415 09:05:48.794833   32738 status_manager.go:152] Starting to sync pod status with apiserver
I0415 09:05:48.794949   32738 kubelet.go:1829] Starting kubelet main sync loop.
I0415 09:05:48.795070   32738 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
I0415 09:05:48.795284   32738 volume_manager.go:246] The desired_state_of_world populator starts
I0415 09:05:48.795400   32738 volume_manager.go:248] Starting Kubelet Volume Manager
I0415 09:05:48.797291   32738 desired_state_of_world_populator.go:130] Desired state populator starts to run
W0415 09:05:48.797595   32738 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
E0415 09:05:48.798235   32738 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
I0415 09:05:48.826555   32738 factory.go:356] Registering Docker factory
I0415 09:05:48.827769   32738 factory.go:54] Registering systemd factory
I0415 09:05:48.828272   32738 factory.go:97] Registering Raw factory
I0415 09:05:48.828593   32738 manager.go:1222] Started watching for new ooms in manager
I0415 09:05:48.829421   32738 manager.go:365] Starting recovery of all containers
I0415 09:05:48.886652   32738 manager.go:370] Recovery completed
I0415 09:05:48.903215   32738 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet]
I0415 09:05:48.903401   32738 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
E0415 09:05:48.903899   32738 kubelet.go:2266] node "dev-de-cloud-k8s-master-1" not found
I0415 09:05:48.930633   32738 cloud_request_manager.go:113] Node addresses from cloud provider for node "dev-de-cloud-k8s-master-1" not collected
I0415 09:05:48.949823   32738 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
E0415 09:05:49.001257   32738 kubelet_node_status.go:68] Unable to construct v1.Node object for kubelet: failed to get instance ID from cloud provider: instance not found
E0415 09:05:49.004199   32738 kubelet.go:2266] node "dev-de-cloud-k8s-master-1" not found
F0415 09:05:49.027382   32738 kubelet.go:1379] Kubelet failed to get node info: failed to get instance ID from cloud provider: instance not found
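The fatal line above (`failed to get instance ID from cloud provider: instance not found`) typically means the OpenStack cloud provider looked up a Nova instance matching the kubelet's node name and found nothing, for example because the Nova server name and the machine's hostname differ. A minimal diagnostic sketch (the `check_match` helper is hypothetical, and the `openstack` usage assumes credentials have been sourced into the environment):

```shell
# check_match NAME: read a newline-separated list of server names on stdin
# and report whether NAME appears in it exactly.
check_match() {
  if grep -Fxq -- "$1"; then
    echo "match"
  else
    echo "no match"
  fi
}

# Real usage (assumption: an OpenStack RC file has been sourced first):
#   openstack server list -f value -c Name | check_match "$(hostname -s)"
```

If this reports `no match`, the cloud provider cannot resolve the node, and kubelet will keep failing exactly as in the log above.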

Terraform config used to create the resources on OpenStack:

```
cluster_name = "dev-de-cloud"
public_key_path = "~/.ssh/id_rsa.pub"
image = "Ubuntu 18.04 (Bionic Beaver)"
ssh_user = "ubuntu"

# standalone etcds
number_of_etcd = 0

# masters
number_of_k8s_masters = 1
number_of_k8s_masters_no_etcd = 0
number_of_k8s_masters_no_floating_ip = 0
number_of_k8s_masters_no_floating_ip_no_etcd = 0
flavor_k8s_master = "6d66d5e3-847a-4d1f-a0f7-498fbed84e92"

# nodes
number_of_k8s_nodes = 2
number_of_k8s_nodes_no_floating_ip = 0
flavor_k8s_node = "3a6ffafc-1f45-4100-8aa2-f8e754b650d1"

# internal network
network_name = "dev-de-cloud"
subnet_cidr = "192.168.0.0/16"

# public network
external_net = "e09fd0a9-abc7-462b-9faf-2de6d1d36ef5"
floatingip_pool = "public"

# 0|1 bastion nodes
number_of_bastions = 0
#flavor_bastion = "<UUID>"
bastion_allowed_remote_ips = ["0.0.0.0/0"]
```

Also happened with Kubespray v2.8.4/Kubernetes v1.12.5, see: https://github.com/kubernetes/kubeadm/issues/1497

fhemberger commented 5 years ago

/sig openstack

fhemberger commented 5 years ago

Can confirm the same behavior with latest Debian Stretch and Fedora Cloud 29. Any idea what might cause it or how to get to the bottom of this?

Any further logs, configs, etc. that might be helpful for debugging?

holmsten commented 5 years ago

Can you check the logs from the controller-manager pod, if it's up and running at that stage? Is it your own OpenStack cloud or are you using a public provider?

fhemberger commented 5 years ago

kubelet fails to start on the master node (see attached logs), so there are no pods running. Only the external etcd Docker container is up and running at this point.

OpenStack ("Queens", AFAIK) is running on premises.

brunovlucena commented 5 years ago

The same problem happens to me on Ubuntu 18.04.

Kubespray: tag v2.10.0, origin/release-2.10

I got it solved by upgrading the Python libraries and running the playbook again: `pip freeze --local | grep -v '^\-e' | cut -d = -f 1 | xargs -n1 sudo pip install -U`
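For reference, the one-liner above upgrades every globally installed pip package: it lists them, drops editable (`-e`) installs, strips the version pins, and feeds the bare names to `pip install -U`. The name-extraction stage can be exercised on its own (the `extract_names` helper is hypothetical, for illustration only):

```shell
# Turn pip-freeze-style output (name==version, one per line) into bare
# package names, skipping editable ("-e") installs; the original one-liner
# pipes this into `xargs -n1 sudo pip install -U`.
extract_names() {
  grep -v '^\-e' | cut -d = -f 1
}

# Example with static input (an assumption, not real pip output):
printf 'ansible==2.7.6\n-e git+https://example.invalid/repo\njinja2==2.10\n' | extract_names
```

Note that this upgrades packages indiscriminately, which can move Ansible past the version range Kubespray was tested with.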

fhemberger commented 5 years ago

Unfortunately, this didn't solve it for me. To make sure there are no other side effects, I'm running the entire setup in a Docker container:

```dockerfile
FROM ubuntu:18.04

# ENV (not "RUN export") so the setting persists across build steps.
ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update \
  && apt-get install -y \
    python \
    openssh-client \
    iputils-ping \
    python-pip \
    software-properties-common \
  && rm -rf /var/lib/apt/lists/*

# Version pins are quoted so the shell doesn't treat ">=" as a redirect.
RUN pip install \
  "ansible>=2.7.6" \
  "jinja2>=2.9.6" \
  netaddr \
  "pbr>=1.6" \
  hvac \
  jmespath \
  ruamel.yaml \
  python-openstackclient

RUN mkdir -p /root/ansible
WORKDIR /root/ansible
```

Built and started with:

```sh
docker build -t ansible-kubespray .
docker run --rm -ti \
  -v $(pwd):/root/ansible \
  -v ~/.ssh/id_rsa:/root/.ssh/id_rsa:ro \
  -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub:ro \
  ansible-kubespray \
  bash
```
fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fhemberger commented 5 years ago

/remove-lifecycle stale

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kubespray/issues/4529#issuecomment-570065871):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
ChoppinBlockParty commented 4 years ago

@fhemberger did you find any solution?

fhemberger commented 4 years ago

@ChoppinBlockParty Kind of. We stopped using kubespray for our setup. 🤷

trydalch commented 1 year ago

@fhemberger What did you decide to use instead?

fhemberger commented 1 year ago

@trydalch Went with RKE (Rancher Kubernetes Engine), but that was over two years ago. There may be other viable solutions as well by now.

ChoppinBlockParty commented 1 year ago

We ended up writing our own scripts for the setup, but it is not too complicated and does not change often.