Closed · petomalina closed this issue 2 years ago
Thank you for filing this issue.
The main concern here is that multiple NEG controllers on different clusters would fight over the same NEG. Each would in turn wipe out the others' endpoints and add its own into the NEG. We have brainstormed a few ideas to implement this feature. One is to utilize the per-endpoint metadata in the NEG to store which cluster each endpoint belongs to. That way each NEG controller can effectively manage its own endpoints and would not touch the others'. This also allows merging endpoints from multiple clusters into one NEG.
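To make the ownership idea concrete, the NEG's endpoint list could carry per-endpoint metadata like the sketch below. This is only an illustration: the annotation key and cluster names are made up, not an existing convention.

```yaml
# Illustrative NEG contents if each controller tagged the endpoints it owns.
# A controller would only add/remove endpoints carrying its own cluster tag.
networkEndpoints:
- ipAddress: 10.0.1.5
  port: 8080
  annotations:
    neg-controller/cluster: cluster-a   # managed by cluster-a's controller
- ipAddress: 10.4.2.7
  port: 8080
  annotations:
    neg-controller/cluster: cluster-b   # managed by cluster-b's controller
```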
One more caveat is GC. If the useExisting annotation is specified, the NEG controller would not conduct any NEG GC. However, a user could remove the annotation and trick the NEG controller into garbage-collecting the NEG anyway.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Re-opening this because it's still a problem for me, and I wanted a general opinion on a few things.
It would be pretty awesome to have this work as the issue describes, or at least to have a discussion on a workaround that is "neat".
At the moment I create the service within Kubernetes and the backend services within GCP outside of the scope of Kubernetes (configuration as code such as Terraform, for example). But because of the asynchronous nature of creation, it gets a bit messy to discover, or even reliably associate, the NEG that was created this way.
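For what it's worth, the named-NEG support from #919 at least lets the Service pin the NEG to a known name so Terraform doesn't have to guess; a minimal sketch (names illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Ask the NEG controller to create a standalone NEG with a fixed name,
    # so out-of-band tooling like Terraform can reference it deterministically.
    cloud.google.com/neg: '{"exposed_ports": {"8080": {"name": "my-app-neg"}}}'
spec:
  selector:
    app: my-app
  ports:
  - port: 8080
    targetPort: 8080
```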
Valid point about NEG controllers fighting over each other's endpoints too, @freehan, as you mentioned. It doesn't sound like an "easy" problem either way.
Would you not get the same "fighting" behavior if you had two separate Services with the same NEG annotation, or does the controller prioritize the annotation that was there "first"? If it's the former, it sounds like we'd have the same behavior whether the controller created the NEG or it was created ahead of time and its name passed in.
/reopen /remove-lifecycle rotten
@lucasteligioridis: You can't reopen an issue/PR unless you authored it or you are a collaborator.
This issue connects to #919. Since we already have the ability to choose the name, it would make things simpler to also have the ability to attach to an existing NEG.
The behavior could be explicit, like another key next to name called useExisting, that would not create a NEG but rather attach to an existing one. This would simplify the creation of NEGs with Terraform, where we could build the whole infra in a single go instead of deploying pods just to connect NEGs to LBs.
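Concretely, the annotation might look like the sketch below. Note that useExisting is only the proposed key, not something the NEG controller supports today, and its placement in the JSON is my assumption:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Hypothetical: "useExisting" is only a proposal. The controller would
    # attach endpoints to the pre-created "my-app-neg" (e.g. made by
    # Terraform) instead of creating it, and would skip GC for it.
    cloud.google.com/neg: '{"exposed_ports": {"8080": {"name": "my-app-neg", "useExisting": true}}}'
spec:
  selector:
    app: my-app
  ports:
  - port: 8080
```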