Open petrkr opened 1 day ago
As a workaround I had to add this ClusterRole and its binding:
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # ClusterRoles are cluster-scoped, so no namespace is needed
  name: nodeTempClusterSubnetByHand
rules:
  - apiGroups: ["acn.azure.com"]
    resources: ["clustersubnetstates"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nodeTempClusterSubnetByHandRoleBinding
subjects:
  - kind: ServiceAccount
    name: azure-cns
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nodeTempClusterSubnetByHand
  apiGroup: rbac.authorization.k8s.io
```
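After applying the manifest, you can confirm the grant took effect by impersonating the ServiceAccount with `kubectl auth can-i` (this requires sufficient rights yourself, e.g. cluster-admin):

```shell
# Check whether the azure-cns ServiceAccount may list the custom resource;
# uses the resource.group form to disambiguate the API group
kubectl auth can-i list clustersubnetstates.acn.azure.com \
  --as=system:serviceaccount:kube-system:azure-cns
```

It should print `yes` once the ClusterRoleBinding is in place.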
@petrkr you can delete the CRD to mitigate this (CRDs are cluster-scoped, so no namespace flag is needed):

```shell
kubectl delete crd clustersubnetstates.acn.azure.com
```
CNS will log a slightly different error about the CRD not being found, but that one is benign and it will operate normally.
This has been fixed in https://github.com/Azure/azure-container-networking/pull/3029 and the latest CNS 1.6.13 is rolling out to AKS imminently.
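To see whether the fixed release has reached a given cluster, one way is to inspect the CNS image tag (this assumes the CNS workload is the `azure-cns` DaemonSet in `kube-system`, as in `azure-cns.yaml`):

```shell
# Print the container image (including tag) that the azure-cns
# DaemonSet is currently configured to run
kubectl -n kube-system get ds azure-cns \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```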
It seems the new azure-cns is missing some roles/permissions. After updating to Kubernetes 1.30.4, CNS is unable to authorize against the API.
As a result, no new IP addresses can be assigned to Pods, which causes this error.
Maybe a role binding is missing in https://github.com/Azure/azure-container-networking/blob/master/cns/azure-cns.yaml ?
As a result, the cluster is stuck.