kubernetes-sigs / kubespray

Deploy a Production Ready Kubernetes Cluster
Apache License 2.0

v2.23.0 metallb configurations not being rendered #10551

Closed SQLJames closed 7 months ago

SQLJames commented 1 year ago

Environment:

Kubespray version (commit) (git rev-parse --short HEAD): quay.io/kubespray/kubespray:v2.23.0

Network plugin used: calico

Full inventory with variables (ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"): (https://gist.github.com/SQLJames/7887eb7dae8740d9d7c521af92c5030a#file-gistfile1-txt)

Command used to invoke ansible:

docker pull quay.io/kubespray/kubespray:v2.23.0
docker run --rm -it  --mount type=bind,source="$(pwd)"/inventory/mycluster,dst=/inventory --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa quay.io/kubespray/kubespray:v2.23.0 bash

ansible-playbook -i /inventory/hosts.yml --become --become-user=root -u ansible cluster.yml

MetalLB doesn't seem to be laying out the networking configuration. Here is the output of:

cat /etc/kubernetes/metallb.yaml

https://gist.github.com/SQLJames/34d65508073a92642dde4f7dc13e1d23#file-gistfile1-txt

metallb_enabled: true
metallb_speaker_enabled: true
metallb_config:
  address_pools:
    primary:
      ip_range:
        - "172.16.43.0-172.16.43.128"
      auto_assign: true
  layer2:
    - primary

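For reference, with the `metallb_config` above, the rendered manifest would be expected to contain an `IPAddressPool` plus a matching `L2Advertisement` along these lines. This is a hand-written sketch based on MetalLB's v1beta1 custom resources, not the actual kubespray template output:

```yaml
# Expected shape of the resources pools.yaml.j2 / layer2.yaml.j2 should emit
# for the "primary" pool (illustrative sketch, not kubespray output).
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: primary
  namespace: metallb-system
spec:
  addresses:
    - 172.16.43.0-172.16.43.128
  autoAssign: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: primary
  namespace: metallb-system
spec:
  ipAddressPools:
    - primary
```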
I believe this could be related to this change, which was merged for this release to fix a timeout: https://github.com/kubernetes-sigs/kubespray/pull/9995/files. But I think it is no longer rendering some of the critical templates, such as layer2.yaml.j2, layer3.yaml.j2, and pools.yaml.j2.
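The mapping those templates are expected to perform, from the `metallb_config` structure to MetalLB v1beta1 custom resources, can be sketched in plain Python. The function name and manifest layout here are illustrative only; this is not kubespray's template code:

```python
# Sketch of the transformation pools.yaml.j2 / layer2.yaml.j2 are expected
# to perform on metallb_config. Illustrative only, NOT kubespray code.

def render_metallb(config):
    manifests = []
    # Each entry under address_pools should become an IPAddressPool.
    for name, pool in config["address_pools"].items():
        manifests.append({
            "apiVersion": "metallb.io/v1beta1",
            "kind": "IPAddressPool",
            "metadata": {"name": name, "namespace": "metallb-system"},
            "spec": {
                "addresses": pool["ip_range"],
                "autoAssign": pool.get("auto_assign", True),
            },
        })
    # Each pool listed under layer2 should get an L2Advertisement.
    for pool_name in config.get("layer2", []):
        manifests.append({
            "apiVersion": "metallb.io/v1beta1",
            "kind": "L2Advertisement",
            "metadata": {"name": pool_name, "namespace": "metallb-system"},
            "spec": {"ipAddressPools": [pool_name]},
        })
    return manifests

metallb_config = {
    "address_pools": {
        "primary": {"ip_range": ["172.16.43.0-172.16.43.128"],
                    "auto_assign": True},
    },
    "layer2": ["primary"],
}

for m in render_metallb(metallb_config):
    print(m["kind"], m["metadata"]["name"])
```

If the rendered /etc/kubernetes/metallb.yaml contains no `IPAddressPool` or `L2Advertisement` documents at all, that points at the templates not being included rather than at the inventory values.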

here is a list of the other files in the directory

ansible@k8s-cp-01:~$ ls -alth /etc/kubernetes/
total 244K
drwxr-xr-x  4 kube root 4.0K Oct 23 04:48 .
-rw-r--r--  1 root root  74K Oct 23 04:48 metallb.yaml
-rw-r--r--  1 root root 2.6K Oct 23 04:47 nodelocaldns-daemonset.yml
-rw-r--r--  1 root root  149 Oct 23 04:47 nodelocaldns-sa.yml
-rw-r--r--  1 root root 1.1K Oct 23 04:47 nodelocaldns-config.yml
-rw-r--r--  1 root root  763 Oct 23 04:47 dns-autoscaler-sa.yml
-rw-r--r--  1 root root  959 Oct 23 04:47 dns-autoscaler-clusterrolebinding.yml
-rw-r--r--  1 root root 1.2K Oct 23 04:47 dns-autoscaler-clusterrole.yml
-rw-r--r--  1 root root 2.6K Oct 23 04:47 dns-autoscaler.yml
-rw-r--r--  1 root root  539 Oct 23 04:47 coredns-svc.yml
-rw-r--r--  1 root root  190 Oct 23 04:47 coredns-sa.yml
-rw-r--r--  1 root root 3.2K Oct 23 04:47 coredns-deployment.yml
-rw-r--r--  1 root root  597 Oct 23 04:47 coredns-config.yml
-rw-r--r--  1 root root  451 Oct 23 04:47 coredns-clusterrolebinding.yml
-rw-r--r--  1 root root  473 Oct 23 04:47 coredns-clusterrole.yml
-rw-r--r--  1 root root  16K Oct 23 04:47 cni-flannel.yml
-rw-r--r--  1 root root  742 Oct 23 04:47 cni-flannel-rbac.yml
-rw-r-----  1 root root  408 Oct 23 04:45 node-crb.yml
-rw-------  1 root root 5.6K Oct 23 04:45 scheduler.conf
-rw-------  1 root root 2.0K Oct 23 04:45 kubelet.conf
-rw-------  1 root root 5.6K Oct 23 04:45 controller-manager.conf
-rw-------  1 root root 5.6K Oct 23 04:45 admin.conf
-rw-------  1 root root 2.0K Oct 23 04:45 kubelet.conf.8972.2023-10-23@04:45:52~
drwxr-xr-x  2 kube root 4.0K Oct 23 04:45 manifests
-rw-------  1 root root 5.6K Oct 23 04:45 scheduler.conf.8979.2023-10-23@04:45:52~
-rw-------  1 root root 5.6K Oct 23 04:45 controller-manager.conf.8965.2023-10-23@04:45:52~
-rw-------  1 root root 5.6K Oct 23 04:45 admin.conf.8958.2023-10-23@04:45:52~
drwxr-xr-x  2 root root 4.0K Oct 23 04:45 ssl
-rw-r-----  1 root root 3.7K Oct 23 04:45 kubeadm-config.yaml
-rw-r--r--  1 root root  204 Oct 23 04:45 kubescheduler-config.yaml
-rw-------  1 root root  793 Oct 23 04:44 kubelet-config.yaml
-rw-------  1 root root  523 Oct 23 04:44 kubelet.env
drwxr-xr-x 95 root root 4.0K Oct 23 04:44 ..
-rw-r--r--  1 root root  480 Oct 23 04:35 kubeadm-images.yaml
lrwxrwxrwx  1 root root   19 Oct 23 04:29 pki -> /etc/kubernetes/ssl
ansible@k8s-cp-01:~$
k8s-triage-robot commented 9 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

VannTen commented 9 months ago

Can you provide your inventory and the output of ansible? Your gists end in 404.

/triage needs-information

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 7 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/kubespray/issues/10551#issuecomment-2043640495):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.