kubernetes-sigs / cloud-provider-azure

Cloud provider for Azure
https://cloud-provider-azure.sigs.k8s.io/
Apache License 2.0

VMSS Instances are not getting added as part of the backendaddresspool #384

Closed: aroramanish080282 closed this issue 3 years ago

aroramanish080282 commented 4 years ago

Question: This is a self-managed/self-hosted Kubernetes setup with CCM changes for Azure vendor support. Existing and new VMSS instances are not automatically added to the kubernetes-internal load balancer's backend address pools.

Setup: VMSS + Kubernetes (MicroK8s) + CCM changes

Problem:

Warning SyncLoadBalancerFailed service/nginx-svc Error syncing load balancer: failed to ensure load balancer: ensure(default/nginx-svc): backendPoolID(/subscriptions//resourceGroups/amitestaug/providers/Microsoft.Network/loadBalancers/kubernetes-internal/backendAddressPools/kubernetes) - failed to ensure host in pool: "instance not found"

Warning SyncLoadBalancerFailed 37s (x7 over 8m42s) service-controller Error syncing load balancer: failed to ensure load balancer: ensure(cwpp/cwppbroker): backendPoolID(/subscriptions//resourceGroups/amitestaug/providers/Microsoft.Network/loadBalancers/kubernetes-internal/backendAddressPools/kubernetes) - failed to ensure host in pool: "instance not found"

The idea behind using VMSS is to scale the cluster and the hosted services running behind a load balancer. This is a self-managed Kubernetes deployment, not AKS.

We referred to the Azure implementation of CCM maintained by Microsoft at https://github.com/kubernetes-sigs/cloud-provider-azure, which can be consumed by any CNCF-certified Kubernetes distribution.

What happened:

New and existing VMSS instances are not automatically added to the LB backendAddressPool kubernetes.

What you expected to happen: New and existing VMSS instances should be added automatically to the LB backendAddressPool kubernetes.

How to reproduce it: Create and configure a VMSS and launch a few instances, configure Kubernetes (MicroK8s), and deploy the CCM changes (cloud-config sketch below).
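For reference, the CCM reads its Azure settings from a cloud-config file (commonly mounted at /etc/kubernetes/azure.json). A minimal sketch for a VMSS-backed cluster might look like the following; every value here is a placeholder for this setup except the resource group from the logs above, and the field worth double-checking is vmType, since the provider only resolves nodes through the scale-set API when it is set to "vmss" (a common cause of "instance not found" on scale-set nodes is leaving it at the default "standard"):

```json
{
  "cloud": "AzurePublicCloud",
  "tenantId": "<tenant-id>",
  "subscriptionId": "<subscription-id>",
  "aadClientId": "<client-id>",
  "aadClientSecret": "<client-secret>",
  "resourceGroup": "amitestaug",
  "location": "<region>",
  "vmType": "vmss",
  "vnetName": "<vnet-name>",
  "subnetName": "<subnet-name>",
  "securityGroupName": "<nsg-name>",
  "useInstanceMetadata": true
}
```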

Anything else we need to know?:

Environment:

feiskyer commented 4 years ago

@aroramanish080282 I'm not familiar with Microk8s setup, but could you confirm whether NodeName is the same as hostname for all the instances?
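A quick way to check, assuming kubectl access and SSH to the instances (the node name below is a placeholder):

```sh
# List registered node names; the NAME column should match each VM's hostname.
kubectl get nodes -o wide

# On each VMSS instance, print the OS hostname for comparison.
hostname

# Also worth checking: the node's providerID, which the cloud provider uses to
# locate the Azure resource. For a VMSS instance it should reference the scale
# set, e.g. azure:///subscriptions/.../virtualMachineScaleSets/<vmss>/virtualMachines/<n>.
kubectl get node <node-name> -o jsonpath='{.spec.providerID}'
```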

aroramanish080282 commented 4 years ago

Yes, NodeName is the same as the hostname for all the instances.

MicroK8s is a CNCF-certified upstream Kubernetes deployment that runs entirely on your workstation or edge device. Being a snap, it runs all Kubernetes services natively (i.e., no virtual machines) while packing the entire set of libraries and binaries needed.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community. /close

k8s-ci-robot commented 3 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/cloud-provider-azure/issues/384#issuecomment-777224885):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.