Open mumoshu opened 6 years ago
Hi, Interesting use case!
How do you intend to run the ingress-controller? A Single controller in ONE cluster owning the ALB, or TWO controllers, one in each cluster sharing the ALB? I'm guessing the latter because you want the controller to attach two different target groups.
We currently have the tag kubernetes.io/cluster/<cluster-id>=owned
set on the ALB stack. Maybe we could just make it possible to specify another tag like this manually for another cluster, and have the controller only delete the ALB if it's the only one 'owning' it?
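The "only delete if sole owner" check above could be sketched roughly like this (a minimal sketch with hypothetical helper names, not the controller's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// clusterOwnershipTags returns the cluster IDs that still claim the ALB
// via a kubernetes.io/cluster/<cluster-id> tag.
func clusterOwnershipTags(tags map[string]string) []string {
	const prefix = "kubernetes.io/cluster/"
	var owners []string
	for k := range tags {
		if strings.HasPrefix(k, prefix) {
			owners = append(owners, strings.TrimPrefix(k, prefix))
		}
	}
	return owners
}

// safeToDelete reports whether clusterID is the only cluster tagged on the
// ALB, i.e. deleting it cannot break another cluster's ingresses.
func safeToDelete(tags map[string]string, clusterID string) bool {
	owners := clusterOwnershipTags(tags)
	return len(owners) == 1 && owners[0] == clusterID
}

func main() {
	shared := map[string]string{
		"kubernetes.io/cluster/old-cluster": "owned",
		"kubernetes.io/cluster/new-cluster": "owned",
	}
	// Another cluster still claims the ALB, so deletion is not safe.
	fmt.Println(safeToDelete(shared, "old-cluster")) // false
}
```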
@mikkeloscar Hi, thanks for the response!
Regarding the controller deployment - yes, you are correct. I'm currently assuming TWO controllers, one per cluster, sharing the same ALB. While migrating between clusters, I'd like to "mirror" as many k8s objects as possible, including ingresses and ingress controllers, to reduce the operational burden of fine-grained control. For me, "just mirror everything from one cluster to another to start migrating" seemed the simplest approach.
Regarding the tagging, I'd suggest `kubernetes.io/cluster/<cluster-id>=shared` instead of `owned`, following the AWS provider's convention, so that it is at least clear that one cluster no longer needing the ALB doesn't destroy it, and that it can safely be used by another controller in another cluster.

Additionally, `zalando.org/kube-ingress-aws-controller/<cluster-id>` could be set to `used` or `unused`. Given this, any controller in any cluster could safely destroy the shared ALB when and only when the `zalando.org/kube-ingress-aws-controller/<cluster-id>` tags are set to `unused` for all the `cluster-id`s.

WDYT?
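The "destroy only when every cluster marks the ALB unused" rule could look roughly like this (a sketch under the proposed tag scheme; the tag key and values are the suggestion above, not an existing API):

```go
package main

import (
	"fmt"
	"strings"
)

// tagPrefix is the proposed per-cluster usage tag.
const tagPrefix = "zalando.org/kube-ingress-aws-controller/"

// allClustersDone reports whether every per-cluster usage tag on the ALB is
// set to "unused". If no usage tags are present at all, it returns false,
// so a mis-tagged ALB is never destroyed by accident.
func allClustersDone(tags map[string]string) bool {
	found := false
	for k, v := range tags {
		if strings.HasPrefix(k, tagPrefix) {
			found = true
			if v != "unused" {
				return false
			}
		}
	}
	return found
}

func main() {
	tags := map[string]string{
		"kubernetes.io/cluster/old": "shared",
		tagPrefix + "old":           "unused",
		tagPrefix + "new":           "used",
	}
	// The "new" cluster still uses the ALB, so it must not be destroyed.
	fmt.Println(allClustersDone(tags)) // false
}
```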
@mumoshu sounds good to me and not really complicated to do.
Do you want to try to create a PR for this?
@szuecs Yes, I'm willing to!
@mumoshu great, looking forward to the PR! :)
Hi, thanks a lot for sharing this great project!
I'd like to gradually migrate user workloads from one k8s cluster to another. Theoretically, this can be achieved by having a user-facing ALB associated with target groups from multiple k8s clusters.
How does this relate to this great project, kube-ingress-aws-controller? I'd like to rely on kube-ingress-aws-controller for automating the operations around exposing the many services developed across my org. Given that context, I imagined that if it allowed sharing an ALB across multiple clusters, our devs could keep exposing services easily while ops would have a standard way to gradually migrate user workloads without downtime, switching clusters in a blue-green manner. Such a cluster migration would be necessary for potentially dangerous upgrades affecting the whole cluster, e.g. upgrading the provisioning tool, a service mesh, etc.
AFAIK, kube-ingress-aws-controller deletes ALBs when no corresponding ingresses are found in a single cluster.
Ref: https://github.com/zalando-incubator/kube-ingress-aws-controller/blob/1a6295d3de5dab4fb75b616af2b81fca8f18d65c/worker.go#L43
For instance, would it be possible to add tags like "associated-to/" + "unneeded-by/" so that we can be explicit about whether all the relevant clusters consider an ALB unneeded or not?