hanlins opened this issue 3 years ago
For such a scenario we would also want the ability to configure https_proxy and no_proxy.
We'd need to flesh out the details here: define and agree on what an air-gapped env is and exactly which scenarios and behaviour we want to support end to end. E.g. would this be a one-shot thing, or would we want CAPI components to watch a "proxy config" and react to changes there? I think this will probably deserve a proposal covering all the details.
@hanlins I'm starting to think about this use case, and my main concern is that adding proxy settings can't be achieved by simple variable substitution, which is the only templating solution supported in clusterctl as of today. The only two options I can see here are:
Also, the ongoing work on ManagedCluster might provide some help here, but this is still TBD. If this can help, I'm happy to chat about it.
/milestone Next
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
/reopen
We just encountered a customer that needs this, too.
It could be done through templating in cmd/clusterctl/client/repository.NewComponents with an option that contains the values for https_proxy, http_proxy, and no_proxy.
@joejulian: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen
@dlipovetsky: Reopened this issue.
/lifecycle frozen
/assign @ykakarap
Can you please assess if it would be possible to extend clusterctl to inject http proxy env vars into the provider manifests?
/milestone v1.2
Hey, I left a message on the #cluster-api Slack channel to no avail :( Is it possible to get involved with the effort here? What are the criteria we're going to use to assess if this is possible or not? I'd love to see this feature happen, so please let me know where I can help.
Catching up on the issue. Will get back soon. :)
@faiq I will take a look at this and post my findings here.
/triage accepted
/unassign @ykakarap
@joejulian could you share how you fixed this problem, as per https://github.com/kubernetes-sigs/cluster-api/issues/4585#issuecomment-1027310851, so someone can pick up the work in CAPI?
/help
@fabriziopandini: This request has been marked as needing help from a contributor.
Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
@fabriziopandini we modify the core-components.yaml file with this kustomization overlay:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: NA
spec:
  template:
    spec:
      containers:
        - name: manager
          env:
            - name: HTTP_PROXY
              value: ${HTTP_PROXY:=""}
            - name: HTTPS_PROXY
              value: ${HTTPS_PROXY:=""}
            - name: NO_PROXY
              value: ${NO_PROXY:=""}
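For reference, one possible way to wire such a patch up with kustomize is sketched below; the file names and the Deployment-wide target selector are assumptions, not something stated in this thread:

# kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - core-components.yaml    # the provider manifest to be patched
patches:
  - path: proxy-patch.yaml   # the overlay shown above
    target:
      kind: Deployment       # apply the env vars to every Deployment in the manifest

Assuming the patched manifest is still consumed through clusterctl, the ${HTTP_PROXY:=""}-style placeholders are resolved by clusterctl's variable substitution from environment variables or its configuration file, falling back to empty strings when unset.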
@fabriziopandini I don't remember what we did (and I don't work there anymore so I can't go back and check).
Sounds like we have at least 3 options, in order from "least work required" to "most work required" from our users:
1. Include these env variables in the manifest for the core provider.
2. Document how to add these env variables by patching the manifest, e.g. with kustomize.
3. Document how to use a mutating webhook to set these env variables.
(In all cases, users need to include information like the Pods and Services CIDRs in the NO_PROXY variable, along with fixed values like localhost, etc.; an example is sketched just below.)
I think it's obvious I support 1. :)
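As an illustration of that NO_PROXY point, the values might look something like the snippet below; the proxy endpoint and the CIDRs are placeholders and would need to match the management cluster's actual Pod and Service CIDRs:

env:
  - name: HTTP_PROXY
    value: "http://proxy.example.internal:3128"    # assumed proxy endpoint
  - name: HTTPS_PROXY
    value: "http://proxy.example.internal:3128"    # assumed proxy endpoint
  - name: NO_PROXY
    # example only: cluster-local destinations that must not go through the proxy
    value: "localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,.svc,.svc.cluster.local"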
I agree that adding env vars to the manifest is the simplest way forward; my only concern is that in the past we got push-back for this type of change from folks using GitOps and trying to use the YAML files directly (and in fact there is https://github.com/kubernetes-sigs/cluster-api/issues/3881 asking to remove all the variables we currently have).
I've never been a fan of adding the complexity of templating to cluster-api a la ClusterClass, but the community felt the return was worth it. Embracing that change, I'm not sure, now, where the distinction lies between one form of templating and another. Is there a way to solve this that's more in line with ClusterClass, maybe?
Q: 1. Include these env variables in the manifest for the core provider.
In an air-gapped environment, Cluster API provider pods cannot talk to the infrastructure provider directly.
Just for my understanding: for which connections do we need the http proxy configuration?
I'm just a bit confused because the original ask was for the infra provider, but core CAPI is not accessing it. And having it consistently in infra providers would require agreement with infra providers (maybe an addition to the contract)
4. communication from workload clusters to endpoints (registry, internet, ...)
Should probably be from controllers / mgmt cluster to registry/internet?
I think the issue is about setting a proxy for CAPI providers/controllers only (based on the issue description).
But based on the title it could be proxy support in general.
I don't think you can add generalized proxy support. There's no standard.
Sounds like we have at least 3 options, in order from "least work required" to "most work required" from our users:
1. Include these env variables in the manifest for the core provider.
2. Document how to add these env variables by patching the manifest, e.g. with kustomize.
3. Document how to use a mutating webhook to set these env variables.
(In all cases, users need to include information like the Pods and Services CIDRs in the NO_PROXY variable, along with fixed values like localhost, etc.)
Agreed, at minimum we could provide some guidance docs
/kind documentation
/priority backlog
User Story
As an operator, I would like to add proxy settings to CAPI providers for air-gapped environments.
Detailed Description
In an air-gapped environment, Cluster API provider pods cannot talk to the infrastructure provider directly. In this scenario, a proxy server is typically deployed to enable the connectivity and to audit the traffic that bypasses the firewall. It would be ideal to have a mechanism to plumb the proxy server configuration into the Cluster API provider pods, so that they can communicate with the infrastructure.
Anything else you would like to add: One approach I can think of is to have something like this:
The implementation should be similar to https://github.com/kubernetes/kubernetes/pull/84559.
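To make the idea concrete, here is a minimal sketch of the desired end state, assuming the proxy settings are plumbed in as environment variables on a provider controller Deployment; the names and values below are illustrative assumptions, not the original proposal:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: capi-controller-manager    # example: the core provider controller
  namespace: capi-system
spec:
  template:
    spec:
      containers:
        - name: manager
          env:
            - name: HTTPS_PROXY
              value: "http://proxy.example.internal:3128"     # assumed proxy address
            - name: NO_PROXY
              value: "localhost,127.0.0.1,.svc,.cluster.local" # example exclusions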
/kind feature