vexxhost / atmosphere

Simple & easy private cloud platform featuring VMs, Kubernetes & bare-metal

Add support for HTTP proxy during deployment #1348

Open gtirloni opened 2 weeks ago

gtirloni commented 2 weeks ago

When deploying in air-gapped networks that require the use of an HTTP proxy, the deployment fails because various tools expect the proxy configuration to be defined in different ways (e.g. through https_proxy environment variables, config files, or command-line parameters).

Additionally, configuring containerd to always use a proxy creates other issues, such as having to keep the NO_PROXY env var updated whenever the environment changes, which makes troubleshooting harder. The same situation happens when the target hosts are all locally configured to use a proxy.
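For context, this is the kind of host-level setup that problem usually implies. It is a rough sketch only: the drop-in path, proxy URL, and NO_PROXY list are placeholders, not something Atmosphere itself manages today.

```yaml
# Illustrative only: making containerd proxy-aware via a systemd drop-in,
# which is what forces you to keep NO_PROXY in sync with the environment.
- name: Configure containerd to use an HTTP proxy (placeholder values)
  hosts: all
  become: true
  tasks:
    - name: Create the containerd systemd drop-in directory
      ansible.builtin.file:
        path: /etc/systemd/system/containerd.service.d
        state: directory
        mode: "0755"

    - name: Write the proxy drop-in
      ansible.builtin.copy:
        dest: /etc/systemd/system/containerd.service.d/http-proxy.conf
        content: |
          [Service]
          Environment="HTTP_PROXY=http://proxy.example.com:3128"
          Environment="HTTPS_PROXY=http://proxy.example.com:3128"
          Environment="NO_PROXY=localhost,127.0.0.0/8,10.0.0.0/8,.svc,.cluster.local"
      notify: Restart containerd

  handlers:
    - name: Restart containerd
      ansible.builtin.systemd:
        name: containerd
        daemon_reload: true
        state: restarted
```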

Atmosphere's use of import_playbook also causes issues, because import_playbook does not support passing environment variables down to the imported playbooks. As a result, the proxy settings never reach the target hosts, and the tools/modules end up trying to connect directly to the external network.
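To illustrate the gap: Ansible only applies proxy variables through the `environment:` keyword at the play/block/task level, which each imported play has to declare itself. This is a generic sketch with placeholder hosts and URLs, not Atmosphere's actual playbooks.

```yaml
# Generic illustration: `environment:` set on a play reaches its own tasks,
# but there is no equivalent keyword on `import_playbook`, so imported plays
# run without the proxy settings unless they declare them themselves.
- name: Play that explicitly exports proxy settings to its own tasks
  hosts: all
  environment:
    http_proxy: "http://proxy.example.com:3128"
    https_proxy: "http://proxy.example.com:3128"
    no_proxy: "localhost,127.0.0.0/8,.svc,.cluster.local"
  tasks:
    - name: Fetch an artifact through the proxy
      ansible.builtin.get_url:
        url: https://example.com/artifact.tar.gz
        dest: /tmp/artifact.tar.gz
        mode: "0644"

# By contrast, this does not accept `environment:`, so the imported plays
# fall back to direct connections unless they set the proxy on their own:
- ansible.builtin.import_playbook: some_other_playbook.yml
```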

mpiscaer commented 2 weeks ago

@gtirloni Would https://github.com/vexxhost/atmosphere/pull/579 be a solution? We only use the internet for authentication and monitoring; everything else comes from internal sources.

gtirloni commented 2 weeks ago

@mpiscaer I think that's a good start, but this touches many places. Basically all Helm values need some kind of custom value to point them at the new registry, with authentication (common in more restricted environments).

Yes, I think it's a good start. I was confusing this issue with another one about authenticated container registries.
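For the registry side, this is a hypothetical example of the kind of per-chart override meant here; the key names and hostname are illustrative only and do not reflect Atmosphere's actual Helm values schema, which varies per chart.

```yaml
# Hypothetical per-chart values override -- key names and hostnames are
# illustrative, not Atmosphere's actual schema.
image:
  registry: registry.internal.example.com   # internal mirror of upstream images
  repository: library/some-image
  pullSecrets:
    - internal-registry-credentials          # pre-created registry credential secret
```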

mnaser commented 2 weeks ago

Step one in solving this was this commit:

https://github.com/vexxhost/ansible-collection-containers/commit/1480d52c9a05afda7e772ae48e457e6dae686cae

Step two is documenting the necessary variables; in 99.9999% of cases, it will be:

```yaml
http_proxy: "http://foobar:3128"
https_proxy: "http://foobar:3128"
no_proxy: "localhost,127.0.0.0/8,172.16.0.0/12,192.168.0.0/16,169.254.0.0/16,.svc,.cluster.local"
```

With this, I believe that covers almost all scenarios, unless you've got something wild like running your cluster on non-private IP space WITH an HTTP proxy...
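A minimal sketch of where such variables might live, assuming they are consumed as plain inventory variables; the group_vars path below is an assumption about layout, not documented Atmosphere convention:

```yaml
# inventory/group_vars/all.yml -- assumed location, adjust to your inventory layout
http_proxy: "http://foobar:3128"
https_proxy: "http://foobar:3128"
no_proxy: "localhost,127.0.0.0/8,172.16.0.0/12,192.168.0.0/16,169.254.0.0/16,.svc,.cluster.local"
```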