k3s-io / helm-controller

HTTP_PROXY proxy settings #252

Open aleksasiriski opened 1 day ago

aleksasiriski commented 1 day ago

Is it possible to configure proxy settings? I am using k3s (and possibly rke2 in the future), but I don't want to set HTTP_PROXY. Instead I set CONTAINERD_HTTP_PROXY, which works for pulling images, but it doesn't propagate to the helm install jobs, and I can't seem to find how to set this up.
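
For context, the containerd-scoped variables live in k3s's systemd environment file, something like this (the proxy address is a placeholder):

    # /etc/systemd/system/k3s.service.env
    # CONTAINERD_-prefixed variables apply only to containerd (image pulls),
    # not to the rest of the k3s process or to the pods it runs.
    CONTAINERD_HTTP_PROXY=http://proxy.example.com:3128
    CONTAINERD_HTTPS_PROXY=http://proxy.example.com:3128
    CONTAINERD_NO_PROXY=127.0.0.0/8,10.0.0.0/8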

brandond commented 23 hours ago

Any HTTP_PROXY-related env vars set in the helm controller's environment will be passed into the helm job pods. For k3s and rke2, this means you must set HTTP_PROXY itself (not just the CONTAINERD_-prefixed variant) if you want the helm controller's job pods to use the proxy.
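
For k3s that means something like the following in the service environment file (placeholder values), followed by a restart of the k3s service:

    # /etc/systemd/system/k3s.service.env
    # Unprefixed proxy variables apply to the whole k3s process; the helm
    # controller passes them through to the helm job pods it creates.
    HTTP_PROXY=http://proxy.example.com:3128
    HTTPS_PROXY=http://proxy.example.com:3128
    NO_PROXY=127.0.0.0/8

    # then: systemctl restart k3s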

Why don't you want to set HTTP_PROXY?

aleksasiriski commented 22 hours ago

Why don't you want to set HTTP_PROXY?

It's not very well documented what it affects. Since I only need the proxy for pulling images and for the helm install pods, I managed to work around this by using a CiliumEgressGatewayPolicy with label keys for the helm install pods.
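
Roughly like this (a sketch: the helmcharts.helm.cattle.io/chart label key, the node selector, and the egress IP are assumptions from my setup, so verify the actual labels on your job pods):

    apiVersion: cilium.io/v2
    kind: CiliumEgressGatewayPolicy
    metadata:
      name: helm-install-egress
    spec:
      selectors:
        # match pods created by the helm-controller jobs; check the real
        # labels with: kubectl get pods -n kube-system --show-labels
        - podSelector:
            matchExpressions:
              - key: helmcharts.helm.cattle.io/chart
                operator: Exists
      destinationCIDRs:
        - 0.0.0.0/0
      egressGateway:
        nodeSelector:
          matchLabels:
            egress-gateway: "true"  # placeholder node label
        egressIP: 192.168.1.100     # placeholder egress IP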

Could you send me a link describing what is affected by setting HTTP_PROXY instead of CONTAINERD_HTTP_PROXY, if such docs exist?

brandond commented 13 hours ago

what is affected by setting HTTP_PROXY instead of CONTAINERD_HTTP_PROXY

Everything, just as you'd expect: the kubelet, apiserver, scheduler, controller-manager, etcd, all the things that run in the main K3s process. Pods are not, of course, because they have their own environments.

aleksasiriski commented 13 hours ago

what is affected by setting HTTP_PROXY instead of CONTAINERD_HTTP_PROXY

Everything, just as you'd expect: the kubelet, apiserver, scheduler, controller-manager, etcd, all the things that run in the main K3s process. Pods are not, of course, because they have their own environments.

Yep, that's what I thought. I don't want my kube-apiserver and everything else to be at the mercy of the proxy's uptime.

So, in the end, would I need to deploy my own helm controller in order to give it the proper HTTP_PROXY env vars? Is there no way around it?

brandond commented 12 hours ago

The apiserver, controller-manager, and such don't actually go out to the internet for anything, and the cluster CIDRs and cluster domain are all automatically added to the NO_PROXY list (as your proxy is unlikely to have access to things running inside the cluster), so I doubt you'd actually run into any problems simply setting the HTTP_PROXY env var. You might want to add your node LAN CIDR and internal DNS zone to the NO_PROXY list as well, just to be sure.
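
For example (placeholder values; substitute your own LAN CIDR and DNS zone):

    # /etc/systemd/system/k3s.service.env
    HTTP_PROXY=http://proxy.example.com:3128
    HTTPS_PROXY=http://proxy.example.com:3128
    # the cluster CIDRs and cluster domain are appended automatically;
    # add the node LAN CIDR and internal DNS zone yourself:
    NO_PROXY=127.0.0.0/8,192.168.0.0/16,.internal.example.com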

Have you actually tried it or are you just expecting there would be problems?

aleksasiriski commented 12 hours ago

The apiserver, controller-manager, and such don't actually go out to the internet for anything, and the cluster CIDRs and cluster domain are all automatically added to the NO_PROXY list (as your proxy is unlikely to have access to things running inside the cluster), so I doubt you'd actually run into any problems simply setting the HTTP_PROXY env var. You might want to add your node LAN CIDR and internal DNS zone to the NO_PROXY list as well, just to be sure.

Oh that's good then.

Have you actually tried it or are you just expecting there would be problems?

My problem was that my proxy is deployed inside the cluster and exposed with a NodePort service, so it is reached at localhost:<port>. That address doesn't work from within the helm install container, but it is exactly what containerd needs, for example.
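
In other words (hypothetical ports and names):

    # On the node, containerd can reach the NodePort via loopback:
    CONTAINERD_HTTP_PROXY=http://127.0.0.1:30128

    # Inside a pod, 127.0.0.1 is the pod itself, so the same address fails
    # there; a pod would have to use the Service's cluster address instead:
    #   HTTP_PROXY=http://proxy.proxy-system.svc.cluster.local:3128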

brandond commented 12 hours ago

I'm confused about what the value proposition would be there - why make your nodes go through something running in the cluster to get out to the internet?

aleksasiriski commented 12 hours ago

I'm confused about what the value proposition would be there - why make your nodes go through something running in the cluster to get out to the internet?

Because I have to apply some filtering to limit what can be downloaded by commands like curl or helm install, for example. I agree that it's an unconventional situation, but the idea was to avoid having yet another set of servers just for the proxy, if possible.