Closed bernielomax closed 1 year ago
It's been a while since this was first posted. Does anyone have feedback or opinions on this? I would like to get it merged into the official project to avoid maintaining a fork.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What would you like to be added:
Update the Helm chart to support installing Kubefed on an AWS EKS cluster that uses a third-party CNI such as Weave.
Why is this needed:
When installing Kubefed 0.9.x on an EKS cluster running Weave, the following error occurs:

I believe this is caused by the fact that the EKS Kubernetes control plane nodes are fully managed by AWS and are not capable of running Weave. Therefore the overlay network does not extend to the master nodes, which breaks communication between the control plane and the pods. According to Weave's official "installing on EKS" docs, this is a known limitation.
I was able to create a workaround for the above limitation by performing the refactors listed below. I am hoping that a similar solution might make its way into the official project.
Note: I have hard-coded certain example values to help demonstrate the workaround. These should actually be set using Helm chart values.
Add the ability to set `hostNetwork: true` on the following resources:

- kubefed-controller (Deployment)
- kubefed-admission-webhook (Deployment)
- kubefed-xxx (Job)

Example: `charts/kubefed/charts/controllermanager/templates/deployments.yaml`
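For illustration, a minimal sketch of how the deployment template could expose this, assuming a hypothetical `.Values.hostNetwork` chart value (the value name and default are assumptions, not the chart's current API):

```yaml
# Sketch for charts/kubefed/charts/controllermanager/templates/deployments.yaml
# (pod spec excerpt only; names and defaults are illustrative).
spec:
  template:
    spec:
      # Hypothetical chart value; defaults to false so existing installs are unchanged.
      hostNetwork: {{ .Values.hostNetwork | default false }}
      {{- if .Values.hostNetwork }}
      # When running on the host network, keep resolving cluster DNS names.
      dnsPolicy: ClusterFirstWithHostNet
      {{- end }}
```

The same toggle would apply to the kubefed-admission-webhook Deployment and the kubefed-xxx Job templates.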
Avoid port conflicts between Kubefed and Kubernetes components (e.g. 443, 8080) when `hostNetwork` is enabled. This can be done by making the ports on the resources above configurable.

Examples:
- `charts/kubefed/charts/controllermanager/templates/deployments.yaml`
- `charts/kubefed/charts/controllermanager/templates/service.yaml`
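For illustration, a sketch of what configurable ports could look like, assuming a hypothetical `.Values.webhook.port` chart value (whatever flag or argument the webhook binary uses to bind its port would need to read the same value):

```yaml
# Sketch for charts/kubefed/charts/controllermanager/templates/deployments.yaml
# (container ports excerpt; the value name is an assumption, not the chart's API).
      containers:
      - name: admission-webhook
        ports:
        # Choose a high, otherwise-unused port so hostNetwork does not clash
        # with host services already listening on 443 or 8080.
        - containerPort: {{ .Values.webhook.port | default 8443 }}
```

The Service can stay on 443 for the API server and simply route to that port:

```yaml
# Sketch for charts/kubefed/charts/controllermanager/templates/service.yaml
# (names are illustrative).
apiVersion: v1
kind: Service
metadata:
  name: kubefed-admission-webhook
spec:
  selector:
    app: kubefed-admission-webhook
  ports:
  - port: 443
    targetPort: {{ .Values.webhook.port | default 8443 }}
```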
Disable the webhook's metrics and health endpoints. The controller runtime (`sigs.k8s.io/controller-runtime/pkg/manager`) automatically binds its own metrics and health-check endpoints to common ports such as 8080. However, these endpoints do not seem to be used (they are not referenced in the Helm chart template for the webhook deployment), so I believe they can be safely disabled.

Example: `cmd/webhook/app/webhook.go`
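As a sketch of the kind of change this would be, assuming a controller-runtime version where `manager.Options` still exposes the `MetricsBindAddress` and `HealthProbeBindAddress` fields (the helper below is illustrative, not kubefed's actual code):

```go
package app

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

// newWebhookManager builds a manager that serves only the admission webhook.
// Illustrative only; the real wiring lives in cmd/webhook/app/webhook.go.
func newWebhookManager(port int) (ctrl.Manager, error) {
	return ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		// "0" tells controller-runtime not to start its metrics listener,
		// so nothing extra is bound to :8080 on the host network.
		MetricsBindAddress: "0",
		// Leave the health probe listener unbound as well (it is off by
		// default when no address is set).
		HealthProbeBindAddress: "",
		// Serve the webhook itself on a configurable port (see the chart
		// changes above) instead of the default 9443.
		Port: port,
	})
}
```

With the metrics and health listeners disabled, the webhook pod binds only its single webhook port, which is then the only port that needs to be deconflicted when hostNetwork is enabled.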
Hopefully folks find this useful and an official solution becomes available soon. 🤞 I am more than happy to contribute!
/kind feature