Open rlevchenko opened 4 years ago
Any thoughts? Since we have this guide, would it be a problem to allow such a migration on the AKS side?
Action required from @Azure/aks-pm
@palma21
Ack on this ask; it is under discussion with the AKS product team to address this pain point in migrating from BLB to SLB.
Re-opening so we may track on the backlog and provide updates. Let me know if you'd rather us create a new one
Until this is developed, can y'all have the networking team at MS stop squeezing the capabilities of the basic load balancer down? It's punitive to earlier adopters and is causing the entire platform to be viewed as not stable.
@palma21 Do we have any update on whether this is under development or not? We created an AKS cluster earlier this year and did not get a prompt for basic vs standard, so it defaulted to standard. We successfully created another node pool that we've used, and it worked as expected. However, we are now unable to deploy a new node pool due to the basic load balancer. Is it possible to get an ETA on this feature?
"Azure public IP addresses now support the ability to be upgraded from Basic to Standard SKU. Additionally, any Basic Public Load Balancer can now be upgraded to a Standard Public Load Balancer, while retaining the same public IP address. This is supported via PowerShell, CLI, templates, and API and available across all Azure regions"
Is it also supported by AKS?
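For anyone wanting to check where their cluster currently stands, the SKUs of the standalone resources can be inspected with the Azure CLI. This is only a sketch: `<node-resource-group>` is a placeholder for the cluster's node resource group, and the load balancer name assumes the usual AKS default of `kubernetes`.

```
# Check the SKU of the cluster's load balancer (typically named "kubernetes")
az network lb show \
  --resource-group <node-resource-group> \
  --name kubernetes \
  --query sku.name --output tsv

# List public IPs in the node resource group along with their SKUs
az network public-ip list \
  --resource-group <node-resource-group> \
  --query "[].{name:name, sku:sku.name}" --output table
```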
Much needed for long-lived AKS clusters that hold a lot of barely movable state.
Looking forward to any update on this as well. :)
@palma21 any ETA on the migration
@palma21 any update on the migration?
This would be great.
Looking forward to this as well. Would be really nice!
+1
+1
+1!
I'm in a weird catch 22 involving somehow having two node pools and a BLB. I can't add a node pool because I've got BLB, so my memory-heavy workloads are forced to run on non-optimized instance types.
+1
Multiple pools would allow us to better designate targets in Azure DevOps pipelines.
@palma21 Any updates on the progress on this issue?
Meanwhile, the upgrade script for a standalone LB SKU is GA: https://azure.microsoft.com/en-us/updates/load-balancer-sku-upgrade-now-available-through-powershell-script-2/ I know it's not the same, and we still need some action from the AKS team anyway.
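For reference, the GA script linked above ships as the `AzureBasicLoadBalancerUpgrade` PowerShell module. A minimal invocation looks roughly like the sketch below (resource names are placeholders); note this upgrades only the standalone load balancer resource, not what AKS records as the cluster's LB SKU.

```powershell
# Install the upgrade module (one-time)
Install-Module -Name AzureBasicLoadBalancerUpgrade -Scope CurrentUser

# Upgrade the Basic load balancer in place
Start-AzBasicLoadBalancerUpgrade `
    -ResourceGroupName <node-resource-group> `
    -BasicLoadBalancerName kubernetes
```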
+1
+1
+1
+1
+1
+1
+1
Any updates on this? I used the script and the LB is now Standard, but AKS still thinks it is Basic. Is there a way to "tell" AKS that the LB is Standard so we can add more node pools?
@cthulhu Does that mean you still see the banner "This cluster is using a basic load balancer. Having multiple node pools is not supported." in AKS > Nodepools, despite the load balancer now being SKU: Standard?
Have you tried adding new node pools to AKS using the CLI (az aks nodepool add)? Do you get any error message?
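A minimal way to test that (cluster and pool names below are placeholders) would be something like the following; if the restriction still applies, this should fail with the Basic-LB error rather than create the pool.

```
az aks nodepool add \
  --resource-group <cluster-resource-group> \
  --cluster-name <cluster-name> \
  --name userpool1 \
  --node-count 1
```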
@palma21 Any update on this issue?
Yes, the banner on the cluster level is saying "This cluster is using a basic load balancer. Having multiple node pools is not supported." Also via CLI I'm getting the same error: "(SLBRequiredForMultipleAgentPools) Basic load balancers are not supported with multiple node pools. Create a cluster with standard load balancer selected to use multiple node pools, learn more at aka.ms/aks/nodepools."
I have a similar state as @cthulhu. I tried upgrading my cluster with

```
Start-AzBasicLoadBalancerUpgrade -ResourceGroupName management-cluster-nodes -BasicLoadBalancerName kubernetes
```

which results in a load balancer with the Standard SKU:

```
> Get-AzLoadBalancer -ResourceGroupName management-cluster-nodes

ResourceGroupName        Name       Location   ProvisioningState Sku Name
-----------------        ----       --------   ----------------- --------
management-cluster-nodes kubernetes westeurope Succeeded         Standard
```
but the network profile of the cluster says I still use basic:
```
> az aks show --resource-group management-cluster --name management-cluster
{
  "aadProfile": null,
  "addonProfiles": null,
  "agentPoolProfiles": [
    {
      ...
      "type": "VirtualMachineScaleSets",
      "upgradeSettings": {
        "maxSurge": null
      },
      "vmSize": "Standard_B2s",
      "workloadRuntime": null
    }
  ],
  ...
  "name": "management-cluster",
  "networkProfile": {
    "dnsServiceIp": "10.0.0.10",
    "dockerBridgeCidr": "172.17.0.1/16",
    "ipFamilies": [
      "IPv4"
    ],
    "loadBalancerProfile": null,
    "loadBalancerSku": "Basic",
    "natGatewayProfile": null,
    "networkMode": null,
    "networkPlugin": "kubenet",
    "networkPluginMode": null,
    "networkPolicy": null,
    "outboundType": "loadBalancer",
    "podCidr": "10.244.0.0/16",
    "podCidrs": [
      "10.244.0.0/16"
    ],
    "serviceCidr": "10.0.0.0/16",
    "serviceCidrs": [
      "10.0.0.0/16"
    ]
  },
  "nodeResourceGroup": "management-cluster-nodes",
  "oidcIssuerProfile": {
    "enabled": false,
    "issuerUrl": null
  },
  "podIdentityProfile": null,
  "powerState": {
    "code": "Running"
  },
  "privateFqdn": null,
  "privateLinkResources": null,
  "provisioningState": "Succeeded",
  "publicNetworkAccess": "Enabled",
  "resourceGroup": "management-cluster",
  "securityProfile": {
    "azureKeyVaultKms": null,
    "defender": null,
    "workloadIdentity": null
  },
  "servicePrincipalProfile": {
    "clientId": "msi"
  },
  "sku": {
    "name": "Basic",
    "tier": "Free"
  },
  ...
}
```
In summary, the feature was added to the backlog almost 3 years ago, and there has been no activity since then (based on the backlog status). Any updates? ETA? It'd be interesting to read about any technical issues you ran into. Thanks.
We ended up recreating the cluster: create a new cluster with a -v1 suffix, migrate workloads, test; then recreate the original cluster, migrate workloads back, test.
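A rough sketch of that recreate-with-Standard-LB path is below. The names mirror the cluster from the earlier comment and are otherwise placeholders; `--load-balancer-sku` is the documented `az aks create` option for selecting the Standard LB.

```
# Create the replacement cluster with a Standard load balancer
az aks create \
  --resource-group management-cluster \
  --name management-cluster-v1 \
  --load-balancer-sku standard \
  --node-count 3

# Point kubectl at the new cluster, then migrate workloads
az aks get-credentials \
  --resource-group management-cluster \
  --name management-cluster-v1
```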
Any news here? We would really appreciate this
Azure has started sending retirement warnings via email "Action required: Upgrade from Basic to Standard SKU public IP addresses in Azure by 30 September 2025"
This has not been resolved for almost 4 years now. So what are our options, other than recreating our clusters?
We are a big company that needs to transition to the standard LB before September 30, 2025, like everyone else. We recently tried to migrate our test cluster and, to our big surprise, discovered it cannot be done without recreating the cluster, as people here have done.
The difference is that we are a big global company that cannot allow the downtime a recreation of the cluster would introduce.
We have decided to postpone the task of transitioning from basic LB to standard LB for another 6 months, and we sincerely expect you to come up with at least some clarification on what to expect. Are you going to provide us with a painless migration approach, or are we to prepare our customers for the inevitable downtime?
@thpou if that helps at all... I built a 2nd cluster in parallel and ran the same app(s) there as a form of HA. Then I flipped the Front Door backend from the old cluster to the new cluster without customers noticing. The same would work with App GW.
+1
Any progress? Hit this today as well...
Any progress? We need to replace this on several AKS
[request] Some features rely on the Standard LB only and aren't allowed on clusters with Basic load balancers. For example, we can't configure authorized IP ranges to limit access to the API server for now. If there are no technical limits, can you allow us to move from Basic LB to Standard LB without re-creating an entire cluster? Any "hidden" or even "unofficial" workarounds?