Closed: JKBGIT1 closed this issue 3 months ago
i called `sudo NO_PROXY=foo.test kubeadm init ...` and that resulted in:
```
$ sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep NO_PROXY -C 1
    env:
    - name: NO_PROXY
      value: foo.test
$ k get ds kube-proxy -n kube-system -o yaml | grep NO_PROXY -C 1
              fieldPath: spec.nodeName
        - name: NO_PROXY
          value: foo.test
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"30+", GitVersion:"v1.30.4-dirty", GitCommit:"a51b3b711150f57ffc1f526a640ec058514ed596", GitTreeState:"dirty", BuildDate:"2024-08-15T13:43:23Z", GoVersion:"go1.22.5", Compiler:"gc", Platform:"linux/amd64"}
```
this works as expected: the kube-proxy DS is created only on init, and it has the proxy env as expected.
i also tried join:
```
$ sudo NO_PROXY=foo.test.join kubeadm join ...
...
$ sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep NO_PROXY -C 1
    env:
    - name: NO_PROXY
      value: foo.test.join
```
> `kubeadm join` doesn't update `NO_PROXY` and `no_proxy` environment variables in `kube-proxy` DaemonSet and static pods when joining a new node.
why do you expect `kubeadm join` to modify the NO_PROXY env vars on the kube-proxy DS? that should not happen, as the kube-proxy DS is only created on init and upgraded on upgrade. join is not supposed to mutate it.
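if you really need different values on a running cluster you can edit the DS yourself; a minimal sketch with `kubectl set env` (the values here are placeholders, not a recommendation):

```
# update the env vars on the existing kube-proxy DaemonSet;
# the DaemonSet controller then rolls the kube-proxy pods
kubectl -n kube-system set env daemonset/kube-proxy \
  NO_PROXY=127.0.0.1/8,localhost,cluster.local \
  no_proxy=127.0.0.1/8,localhost,cluster.local
```

keep in mind that kubeadm upgrade re-applies the addon, so a manual edit like this can be overwritten on the next upgrade.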
Hello @neolit123, thanks for the answer.
> why do you expect kubeadm join to modify the NO_PROXY env vars on the kube-proxy DS? that should not happen, as the kube-proxy DS is only created on init and upgraded on upgrade. join is not supposed to mutate it.
Based on your questions I would assume the nodes' private and public IPs are useless in the `NO_PROXY` and `no_proxy` env variables in the `kube-proxy` DaemonSet, and the only important values for this DS are `127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc`. If that's the case, I don't think I'll need `kubeadm join` to update the no-proxy env variables in the `kube-proxy` DS.
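For concreteness, a value of that shape might look like the lines below. The CIDRs are just the common kubeadm/flannel defaults, not values from this cluster:

```
# example only; substitute your real pod and service CIDRs
export NO_PROXY=127.0.0.1/8,localhost,cluster.local,10.244.0.0/16,10.96.0.0/12,svc
export no_proxy=$NO_PROXY
```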
> $ sudo NO_PROXY=foo.test.join kubeadm join ...
I must have done something wrong because I tried this cmd before opening the issue, but it didn't work. Anyway, I plan to try it once again. Thanks!
kubeadm accepts a KubeProxyConfiguration on kubeadm init, but other than that and the `*_proxy` env vars from the `kubeadm init` host, there is no way to configure the component. if you need more customization of kube-proxy you can call `kubeadm init --skip-phases=addon/kube-proxy` and deploy your own kube-proxy DaemonSet, or manage it with e.g. systemd on each host.
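a minimal sketch of that route (`my-kube-proxy.yaml` is a hypothetical manifest you would maintain yourself):

```
# skip the kube-proxy addon during init
sudo kubeadm init --skip-phases=addon/kube-proxy

# then deploy your own kube-proxy DaemonSet with whatever
# NO_PROXY/no_proxy env values you need
kubectl apply -f my-kube-proxy.yaml
```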
closing: working as intended.
Is this a BUG REPORT or FEATURE REQUEST?
FEATURE REQUEST
Versions

kubeadm version (use `kubeadm version`):

Environment:
- Kubernetes version (use `kubectl version`):
- Cloud provider or hardware configuration: Azure
- Kernel (e.g. `uname -a`):
- Container runtime (CRI): containerd v1.7.20
- Container networking plugin (CNI): Cilium v1.16.0
What happened?

`kubeadm join` doesn't update the `NO_PROXY` and `no_proxy` environment variables in the `kube-proxy` DaemonSet and static pods when joining a new node.

What you expected to happen?
Running `kubeadm join` will update the `NO_PROXY` and `no_proxy` environment variables in the `kube-proxy` DaemonSet and static pods according to the values in `systemctl show-environment` or session env variables.

How to reproduce it (as minimally and precisely as possible)?
1. Create 2 VMs in a cloud provider of your choice (I used Azure).
2. Set up those VMs according to the documentation. That includes installing `containerd` according to the docs: I installed `containerd.io 1.7.20` using the `.deb` package, generated a new `/etc/containerd/config.toml` by running `containerd config default > /etc/containerd/config.toml`, and eventually enabled `SystemdCgroup` in this configuration.
3. SSH into the master node and set up the proxy env variables by running the export commands (replace `<>` with valid values; a hedged sketch of such commands appears after this list).
4. Run `systemctl restart containerd`.
5. Run `kubeadm init`.
6. Run the commands to set up the default kubeconfig (also sketched after this list).
7. Install `helm v3.15.3` following the docs.
8. Add the cilium repo by running `helm repo add cilium https://helm.cilium.io/`.
9. Install `Cilium v1.16.0` by running `helm install cilium cilium/cilium --version 1.16.0 --namespace kube-system`.
10. Make sure all pods (besides 1 `cilium-operator` that will be Pending) are up and healthy.
11. Check the `NO_PROXY` and `no_proxy` environment variables in the static pods and the `kube-proxy` DaemonSet. They should correspond to the values set in the third step.
12. SSH into the worker node and set the proxy env variables by running the same kind of commands. We added the worker node's public and private IPs to the `NO_PROXY` and `no_proxy` variables (replace `<>` with valid values).
13. Run `systemctl restart containerd` on the worker node.
14. Use `kubeadm join` to connect the worker node to the cluster.
15. Check the `NO_PROXY` and `no_proxy` environment variables in the static pods and the `kube-proxy` DaemonSet. You won't see the worker's public and private IPs there, because these env variables weren't updated.
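For completeness, a hedged sketch of the commands elided from steps 3, 6, and 12, plus the checks in steps 11 and 15. The proxy host, port, CIDRs, and IPs are placeholders, not the values from my setup:

```
# steps 3 and 12: proxy env variables, for systemd services and the current session
# (on the worker, NO_PROXY additionally contains that node's public and private IPs)
sudo systemctl set-environment \
  HTTP_PROXY=http://<proxy-host>:<port> \
  HTTPS_PROXY=http://<proxy-host>:<port> \
  NO_PROXY=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<node-public-IP>,<node-private-IP>
export HTTP_PROXY=http://<proxy-host>:<port>
export HTTPS_PROXY=http://<proxy-host>:<port>
export NO_PROXY=127.0.0.1/8,localhost,cluster.local,<pod-CIDR>,<service-CIDR>,svc,<node-public-IP>,<node-private-IP>
export http_proxy=$HTTP_PROXY https_proxy=$HTTPS_PROXY no_proxy=$NO_PROXY

# step 6: default kubeconfig setup, as in the official kubeadm docs
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# steps 11 and 15: check the env vars on the kube-proxy DaemonSet
kubectl -n kube-system get ds kube-proxy -o yaml | grep NO_PROXY -C 1
```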