Note that I really think a feature in Azure is the way to go long term. It would not be Kubernetes specific but would cover all deployments within a subscription and help simplify/coordinate/manage the service tag requirements for teams.
Until Azure chooses to provide such a feature, this service tag IP address behavior falls as a responsibility onto teams building Kubernetes clusters, covering both the aks-engine IP address allocation and all of the IP address allocation done on their behalf when deploying additional services into a cluster. If this were solved at the kubernetes/aks-engine level, it would significantly improve the value of using Kubernetes to deploy services that need to stay within service tag constraints.
(Yes, we have been manually managing this - well, automatically, via another level of code, but manually as far as Kubernetes is concerned - and have been bitten by services that were deployed without this "manual" method in production.)
@paulgmiller does AKS have such a solution at present?
cc @devigned @craiglpeters
@feiskyer @andyzhangx does the azure cloudprovider support allocating service IP addresses from a service tag set of IP addresses?
> does the azure cloudprovider support allocating service IP addresses from a service tag set of IP addresses?
No, this is not supported. Per my understanding, service tags are used for NSGs, not for allocating IPs from a range. What am I missing here?
We get a set of IP addresses (per region) allocated to our service tag up front. The reason is that adding an IP address to a service tag is time-consuming and requires that the address be kept from being used elsewhere until it is removed from the service tag globally (which can take a while, given the TTL on external uses of service tags to make private link work).
That pool of IP addresses does, however, need to be managed by something so that we know which addresses have been used somewhere and which have not. Today that management is left to each team rather than being provided as a standard mechanism. From the Kubernetes perspective, what has to happen today is that each service is deployed with a static IP address that is usable by that service, but someone or something outside of Kubernetes and the cloud controller has to pick that static IP address and do the bookkeeping.
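To make that workaround concrete, here is roughly what the "external allocation, static Service IP" pattern looks like from the Kubernetes side. This is only a sketch using client-go: the names, namespace, resource group annotation value, and the reserved IP are placeholders, and the actual pool bookkeeping lives in tooling outside the cluster.

```go
package svcdeploy

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deployWithReservedIP creates a LoadBalancer Service pinned to an IP that was
// already picked (and reserved) from the team's service-tag pool by tooling
// outside the cluster; reservedIP is that externally managed allocation.
func deployWithReservedIP(ctx context.Context, client kubernetes.Interface, reservedIP string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "my-ingress", // placeholder
			Namespace: "default",
			Annotations: map[string]string{
				// Tells the Azure cloud provider which resource group holds the
				// pre-created public IP (placeholder value).
				"service.beta.kubernetes.io/azure-load-balancer-resource-group": "my-servicetag-ips-rg",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:           corev1.ServiceTypeLoadBalancer,
			LoadBalancerIP: reservedIP, // an address picked from the service-tag range
			Selector:       map[string]string{"app": "my-app"},
			Ports:          []corev1.ServicePort{{Name: "https", Port: 443}},
		},
	}
	_, err := client.CoreV1().Services(svc.Namespace).Create(ctx, svc, metav1.CreateOptions{})
	return err
}
```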
What I am requesting is that this be handled in a standard way so that all customers can benefit from a single implementation of this behavior rather than everyone having to find their own solution.
This is especially troublesome with an "off the shelf" helm chart or application deployment controller that has no way to take the static IP as an input: it will do a normal Kubernetes service deployment with a public IP, which gets allocated dynamically from outside the service tag.
If there were a way to manage this at the cluster level, off-the-shelf deployments would remain ordinary Kubernetes services; only the IP address allocation would change, happening within Kubernetes or the cloud controller (or, even better, within Azure itself via an Azure feature) from the service tag's allocated IP addresses rather than from general public IP addresses.
I would prefer this to be an Azure feature, but having it in Kubernetes would at least make deployments less troublesome and more "kubernetes idiomatic".
I think Azure ML wanted something like this, as they managed egress IPs by hand using the annotation below:
```go
// ServiceAnnotationPIPName specifies the pip that will be applied to load balancer
ServiceAnnotationPIPName = "service.beta.kubernetes.io/azure-pip-name"
```
Could do an annotation like this:

`service.beta.kubernetes.io/azure-pip-servicetag: mywonderfulpool`
But then a mutating webhook would have to apply it if you don't control the charts (see the sketch below).
Checking with Azure ML to see if this is actually something they still want, or if I'm misremembering.
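To make the webhook idea a bit more concrete, here is a rough sketch of the mutation logic such an admission webhook might run, assuming the hypothetical `azure-pip-servicetag` annotation proposed above (it is not an existing cloud provider annotation). The pool bookkeeping itself is not shown; `pickFromServiceTagPool` in the comment is imaginary.

```go
package svctagwebhook

import (
	corev1 "k8s.io/api/core/v1"
)

// Proposed annotation from the discussion above; not an existing cloud
// provider annotation.
const serviceTagAnnotation = "service.beta.kubernetes.io/azure-pip-servicetag"

// mutateService is the core of a hypothetical mutating admission webhook for
// Services. If a LoadBalancer Service arrives with no static IP and no
// service-tag annotation (typical of an off-the-shelf helm chart), it tags the
// Service so the IP would be allocated from the pool instead of being a random
// public IP. Returns true if the Service was changed.
func mutateService(svc *corev1.Service, poolName string) bool {
	if svc.Spec.Type != corev1.ServiceTypeLoadBalancer {
		return false // only LoadBalancer Services get public IPs
	}
	if svc.Spec.LoadBalancerIP != "" {
		return false // someone already pinned a static IP the "manual" way
	}
	if _, ok := svc.Annotations[serviceTagAnnotation]; ok {
		return false // the chart already opted in explicitly
	}
	if svc.Annotations == nil {
		svc.Annotations = map[string]string{}
	}
	// Either defer the actual allocation to the cloud provider (if it ever
	// understands the annotation), or do it here and pin the IP directly:
	//   svc.Spec.LoadBalancerIP = pickFromServiceTagPool(poolName)
	svc.Annotations[serviceTagAnnotation] = poolName
	return true // the caller turns this into a JSONPatch in the admission response
}
```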
We're not dying for this and I see a set of new problems that would mean we might not want to adopt it at all (or would at least need to fund some moderately complex changes to pull it off).
If we want this kind of change, we're going to need to reckon with the loss in transparency of which IPs are being used and how. Today we have specific/exact knowledge of which IPs in our tagged pool are ingress IPs, what they're ingress for (required for rational DNS and Traffic Manager configuration), and which IP resources are egress-only. Because we would lose direct visibility/control here, we would have to come up with a new way of configuring all of the above "after the fact" by determining which IP from the pool was taken for a particular ingress. It's also somewhat concerning that this assignment could, in theory, change in ways we don't currently expect it to.
So after thinking more about this... the idea of an admission controller or something like it is actually quite interesting. We could write hooks against it to deal with the fallout from IP assignments from the pool (creating DNS records / TM endpoints / etc.). I don't know how you would make such a service sufficiently generic. We would also need this service to exist outside our k8s clusters because, when we stand them up, one of the first things we do before the cluster is really alive/online is give it egress IP addresses for NAT purposes...
Discussed with @Michael-Sinz and the CAPZ team.
Where the discussion landed: rather than handling this at the Cluster API level, we should look into it at the cloud provider level. When creating a new service, the IP must come from inside the service tag.
Closing this issue in favor of opening a new one in the cloud provider repo (https://github.com/kubernetes-sigs/cloud-provider-azure/issues/1246).
A way to ensure that all IP addresses allocated for a cluster are allocated from our service tag
**Explain why AKS Engine needs it**
We need to be able to tell customers that traffic to our service and/or traffic from our service will always come from a service tag (a set of IP addresses) so that they can set up network rules to limit or allow traffic to our services.
We have been doing this the "hard way" by making sure k8s services are deployed with static IP addresses and doing the allocation management externally, but that only works as long as all the services play along. A single service in the cluster deployed with an off-the-shelf controller or helm chart will allocate another "random" IP address and break the assurance that inbound (and, more importantly, outbound) traffic only goes through the service tag.
Using dedicated egress IP addresses would simplify this somewhat, but it has non-trivial costs and still requires that inbound services be fully under our control as far as IP address allocation goes.
**Describe the solution you'd like**
It would be great if there were a way to route all IP address allocation (and deallocation) through our own code, or through some Azure service tag allocation mechanism for the cluster/region. (Yes, there are complications here.)
**Describe alternatives you've considered**
We have considered writing an admission controller that catches services without static IP addresses and converts them to static IPs allocated via the service tag, but we have not gotten that done and still have open issues around cleaning up allocations in a dynamic cluster (see the sketch below).
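To show why the cleanup half is the hard part, here is a hypothetical sketch of the bookkeeping interface such an admission controller (or any external allocator) would need. Nothing like this exists today, and all names are made up.

```go
package svctagpool

import "context"

// Allocator is the bookkeeping a service-tag-aware controller would need.
// Each team currently builds its own equivalent of this by hand.
type Allocator interface {
	// Allocate reserves a free IP from the service tag's regional pool and
	// records which Service (namespace/name) owns it.
	Allocate(ctx context.Context, region, owner string) (string, error)

	// Release returns an IP to the pool once the owning Service is deleted.
	// This is the step that is easy to miss in a dynamic cluster and why
	// cleanup is called out above as an open issue.
	Release(ctx context.Context, ip string) error

	// InUse lists current allocations, e.g. to reconcile against the
	// Services that actually exist and catch leaked IPs.
	InUse(ctx context.Context, region string) (map[string]string, error)
}
```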
We also think it would be a great Azure feature to be able to constrain requests for IP addresses from within a resource group (or subscription, if needed) to only come from that resource group's (or subscription's) IP address pool/service tag. That would remove the need for any change in the Kubernetes cloud controller, as this would be core Azure behavior. (This is actually my preferred solution, albeit very far from under our control.)
**Additional context**
I think this will become more broadly wanted as more and more systems/companies/customers build "virtual vnet" style solutions with explicit white-listed service tags for both inbound and outbound traffic. This is just additional defense in depth: it does not change the need for TLS or auth, it just adds one more factor that must be met.