Closed: feiskyer closed this issue 2 years ago
/assign
IMHO, we should divide this feature into two parts: integration setup and PLS/PE lifecycle management. A PLS can be shared between different services, and even between different resource instances, so the PLS should not be maintained within the service object's lifecycle. However, I think it's reasonable to create a PE object for the service object when an appropriate PLS resource is specified, because a PE has a one-to-one relationship with the internal load balancer that the service owns.
A PLS has a one-to-one relationship with the internal LB and should be reconciled when the svc is changed.
Considering the lifecycle management of PLS, should we introduce new resources in Azure Service Operator to wrap up the maintenance logic for PLS/PE?
no need.
Regarding integration setup, we can create/delete the PE instance in the svc reconciliation loop.
@feiskyer Any advice is appreciated. Thanks!
No, PE is not involved here. This feature only needs to create a new PLS for the service's associated ILB frontend configuration.
And by the way, a PLS can't be shared by multiple services (a PLS is associated with an ILB frontend configuration, and an ILB frontend configuration maps 1:1 to a k8s service).
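For context on the 1:1 mapping described above: each `LoadBalancer` Service carrying the internal-LB annotation gets its own ILB frontend configuration, which is what the PLS attaches to. A minimal internal Service looks roughly like this (the name, selector, and ports are illustrative, not from this thread):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-internal-app   # illustrative name
  annotations:
    # provisions an internal (private) Azure load balancer instead of a public one
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-internal-app
  ports:
  - port: 80
    targetPort: 8080
```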
Thanks for pointing out!
Can we also somehow include the approval process in K8s? It would be nice to update the K8s object when there are connection requests, along with the information required to decide whether a request can be approved (request message, source resource ID), and a way to approve/decline the request.
Also, should this feature be Azure-only, or could it be a generic extension that Azure, AWS, GCP, etc. can implement?
@MSSedusch PLS connection approval is a different topic. Both its API and its approval process are different from k8s service objects. I think it's better for customers to use the PLS API for that, since the PLS API is required anyway to create a PE.
I think PE creation is outside the scope of Kubernetes. In this scenario, I assume the person who creates the PE would not be the operator of the K8s cluster, but rather a customer that uses a service hosted on K8s.
Approving the connection on the PLS side updates the PLS resource, and since K8s creates the PLS resource, it feels like a good idea to also approve the connection inside K8s. Otherwise, applications in K8s would have a dependency on an Azure REST API only to approve the connection.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/assign
Design document for adding annotations on service objects to enable Azure PLS integration: https://kubernetes-sigs.github.io/cloud-provider-azure/development/design-docs/pls-integration/
@jwtty is this already live now and can be tested on public AKS clusters?
@sebader not yet. PR #1484 has just been merged into the master branch; it will need some time to be cherry-picked and rolled into AKS.
@feiskyer no worries, I figured :) Would you mind updating this thread once it goes live? The issue in the AKS repo also points to this one.
Hi @sebader, sure we will keep this thread updated! Thank you for checking.
It's really cool to see this happen. Thanks to anybody who was involved! By coincidence, Front Door Standard/Premium also became GA quite recently...
It was actually already posted on Azure Update yesterday but I assume it will take a couple of weeks until the new version of cloud-node-manager lands in an AKS release and then gets rolled out, right?
https://azure.microsoft.com/en-us/updates/public-preview-aks-private-link-service-integration/
Thanks guys for the job done.
I based my yaml on this example. Context:
FYI @MartinForReal
@JamesDLD the changes are not rolled out yet to the live clusters
Do you know how I can stay updated on when this will be rolled out to live clusters?
It often takes two weeks after a new release is published. But it depends.
Hey, this feature has been rolled out to all prod regions. It's now in public preview; here is the official doc. Please feel free to try it!
Hi all, here is what I have tried so far and the results.

Case 1: a sample application, Azure Voting App (no context path, only the hostname).
1. Annotated the service yaml with the annotations below and spec type LoadBalancer:
   - `service.beta.kubernetes.io/azure-load-balancer-internal: "true"`
   - `service.beta.kubernetes.io/azure-pls-create: "true"`
   - `service.beta.kubernetes.io/azure-pls-name: testPLS`
   - `service.beta.kubernetes.io/azure-pls-ip-configuration-subnet: aks-subnet`
   - `service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address-count: "1"`
   - `service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address: 10.224.10.10`
   - `service.beta.kubernetes.io/azure-pls-proxy-protocol: "false"`
   - `service.beta.kubernetes.io/azure-pls-visibility: "*"`
2. Created an Azure Front Door (Premium) and specified the Private Link URL as the origin hostname. Enabled the private link service and the WAF policy.
3. Approved the Private Link.
Case 2: my actual application (various context paths: hostname/path1, hostname/path2, and so on).
1. Annotated the service yaml with the same annotations as in Case 1 and spec type LoadBalancer.
2. Created an Azure Front Door (Premium) and specified the Private Link URL as the origin hostname. Enabled the private link service and the WAF policy.
3. Approved the Private Link.
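For readability, the annotation set used in the cases above can be collected into a single Service manifest. This is a sketch assembled from the annotations quoted in this thread; the Service name, selector, and port are illustrative assumptions (a voting-app-style frontend), not values from the original comment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front   # illustrative name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-pls-create: "true"
    service.beta.kubernetes.io/azure-pls-name: testPLS
    service.beta.kubernetes.io/azure-pls-ip-configuration-subnet: aks-subnet
    service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address-count: "1"
    service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address: 10.224.10.10
    service.beta.kubernetes.io/azure-pls-proxy-protocol: "false"
    service.beta.kubernetes.io/azure-pls-visibility: "*"
spec:
  type: LoadBalancer
  selector:
    app: azure-vote-front   # illustrative selector
  ports:
  - port: 80
```

With this applied, the cloud provider creates the internal LB frontend for the Service and then provisions the PLS (`testPLS`) against that frontend configuration.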
Just published an article about it! https://medium.com/microsoftazure/connect-azure-front-door-premium-to-an-aks-app-origin-with-private-link-5978341c2650 Thanks for this feature! Any idea when this will be GA?
This voting app does work; there is no subsequent path attached to the hostname. Did you try another application with a /path? For me, it does not work.
@gitakp I think your issue is not really related to the original issue here (which tracks Private Link on AKS, and which does work).
Anyway, I built a (fairly complex but complete) scenario:
Installing nginx ingress, which is exposed via PLS: https://github.com/Azure/Mission-Critical-Connected/blob/feature/afd-privatelink/.ado/pipelines/templates/jobs-configuration.yaml#L46
Ingress configuration, including path: https://github.com/Azure/Mission-Critical-Connected/blob/feature/afd-privatelink/src/app/charts/catalogservice/templates/ingress.yaml
Front Door config (in Terraform, not yet GA) https://github.com/Azure/Mission-Critical-Connected/blob/feature/afd-privatelink/src/infra/workload/globalresources/frontdoorv2.tf#L161
and the route: https://github.com/Azure/Mission-Critical-Connected/blob/feature/afd-privatelink/src/infra/workload/globalresources/frontdoorv2.tf#L188
Hi @gitakp @JamesDLD, the currently estimated GA date is 12/1/2022.
What would you like to be added:
Add support for Private Link Service when exposing a service as LoadBalancer, with a new annotation:
service.beta.kubernetes.io/azure-private-link-service: enabled
Without this, customers have to perform the following steps manually:
Step 1: ILB service creation: https://docs.microsoft.com/en-us/azure/aks/internal-lb#create-an-internal-load-balancer
Step 2: find the ILB frontend configuration ID from the step above, then follow https://docs.microsoft.com/en-us/azure/private-link/create-private-link-service-cli#create-a-private-link-service to create the PLS.
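The second manual step can be sketched with the Azure CLI. This is a rough sketch under stated assumptions: the `<...>` values are placeholders you must fill in, the internal LB name `kubernetes-internal` is the usual AKS default but should be verified in your node resource group, and `myPLS` is a hypothetical name:

```
# Find the ILB frontend configuration ID
# (assumption: AKS names the internal LB "kubernetes-internal"; check your node resource group)
FRONTEND_ID=$(az network lb frontend-ip list \
  --resource-group <node-resource-group> \
  --lb-name kubernetes-internal \
  --query "[0].id" -o tsv)

# Create the PLS against that frontend configuration
az network private-link-service create \
  --resource-group <node-resource-group> \
  --name myPLS \
  --vnet-name <vnet-name> \
  --subnet <subnet-name> \
  --lb-frontend-ip-configs "$FRONTEND_ID" \
  --location <region>
```

The proposed annotation would fold both of these steps into the service reconciliation loop, so the user never has to touch the frontend configuration ID directly.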
Why is this needed:
Refer to https://github.com/Azure/AKS/issues/1604.