kubernetes / kubeadm

Aggregator for issues filed against kubeadm

`kubeadm join` doesn't update `NO_PROXY` and `no_proxy` env variables #3099

JKBGIT1 closed this issue 3 months ago

JKBGIT1 commented 3 months ago

Is this a BUG REPORT or FEATURE REQUEST?

FEATURE REQUEST

Versions

kubeadm version (use kubeadm version):

kubeadm version: &version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.4", GitCommit:"a51b3b711150f57ffc1f526a640ec058514ed596", GitTreeState:"clean", BuildDate:"2024-08-14T19:02:46Z", GoVersion:"go1.22.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

What happened?

kubeadm join doesn't update the NO_PROXY and no_proxy environment variables in the kube-proxy DaemonSet and static pods when joining a new node.

What you expected to happen?

Running kubeadm join will update the NO_PROXY and no_proxy environment variables in the kube-proxy DaemonSet and static pods according to the values in systemctl show-environment or the session env variables.
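
A hedged sketch of how to compare the two (the grep patterns are illustrative):

    # what the host's systemd environment holds
    systemctl show-environment | grep -i _proxy
    # what the kube-proxy DaemonSet was actually rendered with
    kubectl get ds kube-proxy -n kube-system -o yaml | grep -i no_proxy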

How to reproduce it (as minimally and precisely as possible)?

  1. Create 2 VMs in a cloud provider of your choice (I used Azure).

  2. Set up those VMs according to the documentation (a command sketch for these host-prep steps appears after this list). That includes:

    • Turn off the swap (docs).
    • Allow IPv4 packet forwarding (docs).
    • Install and configure containerd according to the docs. I installed containerd.io 1.7.20 from a .deb package, generated a new /etc/containerd/config.toml by running containerd config default > /etc/containerd/config.toml, and finally enabled SystemdCgroup in that configuration.
    • Install kubeadm, kubelet, and kubectl (docs).
  3. SSH into the master node and set up the proxy env variables by running the following commands (replace <> with valid values).

    export HTTP_PROXY=http://<proxy-URL>:<proxy-port>
    export HTTPS_PROXY=http://<proxy-URL>:<proxy-port>
    export NO_PROXY=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>
    export http_proxy=http://<proxy-URL>:<proxy-port>
    export https_proxy=http://<proxy-URL>:<proxy-port>
    export no_proxy=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>
    systemctl set-environment HTTP_PROXY=http://<proxy-URL>:<proxy-port> \
      HTTPS_PROXY=http://<proxy-URL>:<proxy-port> \
      NO_PROXY=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP> \
      http_proxy=http://<proxy-URL>:<proxy-port> \
      https_proxy=http://<proxy-URL>:<proxy-port> \
      no_proxy=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>
  4. Run systemctl restart containerd so containerd picks up the proxy variables set via systemctl set-environment.

  5. Run kubeadm init

  6. Run the following commands to set up default kubeconfig:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  7. Install Helm v3.15.3 following the docs.

  8. Add cilium repo by running helm repo add cilium https://helm.cilium.io/

  9. Install Cilium v1.16.0 by running helm install cilium cilium/cilium --version 1.16.0 --namespace kube-system

  10. Make sure all pods are up and healthy (except for one cilium-operator replica, which will stay Pending).

  11. Check the NO_PROXY and no_proxy environment variables in the static pods and the kube-proxy DaemonSet; they should correspond to the values set in step 3 (see the verification sketch after this list).

  12. SSH into the worker node and set the proxy env variables by running the commands below. We added the worker node's public and private IPs to the NO_PROXY and no_proxy variables (replace <> with valid values).

    export HTTP_PROXY=http://<proxy-URL>:<proxy-port>
    export HTTPS_PROXY=http://<proxy-URL>:<proxy-port>
    export NO_PROXY=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>,<worker-node-private-IP>,<worker-node-public-IP>
    export http_proxy=http://<proxy-URL>:<proxy-port>
    export https_proxy=http://<proxy-URL>:<proxy-port>
    export no_proxy=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>,<worker-node-private-IP>,<worker-node-public-IP>
    systemctl set-environment HTTP_PROXY=http://<proxy-URL>:<proxy-port> \
      HTTPS_PROXY=http://<proxy-URL>:<proxy-port> \
      NO_PROXY=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>,<worker-node-private-IP>,<worker-node-public-IP> \
      http_proxy=http://<proxy-URL>:<proxy-port> \
      https_proxy=http://<proxy-URL>:<proxy-port> \
      no_proxy=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<master-node-private-IP>,<master-node-public-IP>,<worker-node-private-IP>,<worker-node-public-IP>
  13. Run systemctl restart containerd on the worker node.

  14. Use kubeadm join to connect the worker node to the cluster.

  15. Check the NO_PROXY and no_proxy environment variables in the static pods and the kube-proxy DaemonSet (the verification sketch below shows how). You won't see the worker's public and private IPs there, because these env variables weren't updated.
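
For step 2, a hedged sketch of the host-prep commands the docs describe (the sed edits and file paths are assumptions; adapt them to your distro):

    # turn off swap now, and keep it off across reboots
    sudo swapoff -a
    sudo sed -i '/ swap / s/^/#/' /etc/fstab

    # allow IPv4 packet forwarding
    echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/k8s.conf
    sudo sysctl --system

    # regenerate the containerd config and switch on SystemdCgroup
    containerd config default | sudo tee /etc/containerd/config.toml
    sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    sudo systemctl restart containerd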
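
For steps 11 and 15, a minimal verification sketch (run where a kubeconfig is available; it mirrors the grep checks used later in this thread):

    # env vars baked into the static pod manifests on a control-plane node
    sudo grep -i no_proxy -C 1 /etc/kubernetes/manifests/*.yaml
    # env vars rendered into the kube-proxy DaemonSet
    kubectl get ds kube-proxy -n kube-system -o yaml | grep -i no_proxy -C 1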

neolit123 commented 3 months ago

i called sudo NO_PROXY=foo.test kubeadm init ...

and that resulted in:

$ sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep NO_PROXY -C 1
    env:
    - name: NO_PROXY
      value: foo.test

$ k get ds kube-proxy -n kube-system -o yaml | grep NO_PROXY -C 1
              fieldPath: spec.nodeName
        - name: NO_PROXY
          value: foo.test
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"30+", GitVersion:"v1.30.4-dirty", GitCommit:"a51b3b711150f57ffc1f526a640ec058514ed596", GitTreeState:"dirty", BuildDate:"2024-08-15T13:43:23Z", GoVersion:"go1.22.5", Compiler:"gc", Platform:"linux/amd64"}

this works as expected. the kube-proxy DS is created only on init, and it has the proxy env as expected.

i also tried join

$ sudo NO_PROXY=foo.test.join kubeadm join ...
...
$ sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep NO_PROXY -C 1
    env:
    - name: NO_PROXY
      value: foo.test.join

> kubeadm join doesn't update NO_PROXY and no_proxy environment variables in kube-proxy DaemonSet and static pods when joining a new node.

why do you expect kubeadm join to modify the NO_PROXY env vars on the kube-proxy DS? that should not happen, as the kube-proxy DS is only created on init and upgraded on upgrade. join is not supposed to mutate it.

JKBGIT1 commented 3 months ago

Hello @neolit123 , thanks for the answer.

> why do you expect kubeadm join to modify the NO_PROXY env vars on the kube-proxy DS? that should not happen, as the kube-proxy DS is only created on init and upgraded on upgrade. join is not supposed to mutate it.

Based on your question, I would assume the nodes' private and public IPs are useless in the NO_PROXY and no_proxy env variables of the kube-proxy DaemonSet, and the only important values for this DS are 127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc. If that's the case, I don't think I'll need kubeadm join to update the no-proxy env variables in the kube-proxy DS.

> $ sudo NO_PROXY=foo.test.join kubeadm join ...

I must have done something wrong, because I tried this cmd before opening the issue and it didn't work. Anyway, I plan to try it once again. Thanks!

neolit123 commented 3 months ago

kubeadm accepts a KubeProxyConfiguration on kubeadm init, but other than that and the *_proxy env vars from the kubeadm init host, there is no way to configure the component. if you need more customization for kube-proxy, you can call kubeadm init --skip-phases=addon/kube-proxy and deploy your own kube-proxy daemonset, or manage it with e.g. systemd on each host.
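
a minimal sketch of that alternative (the DaemonSet manifest name here is hypothetical; you would supply your own):

$ sudo kubeadm init --skip-phases=addon/kube-proxy
$ kubectl apply -f custom-kube-proxy-daemonset.yaml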

closing: working as intended.