kubernetes-sigs / aws-load-balancer-controller

A Kubernetes controller for Elastic Load Balancers
https://kubernetes-sigs.github.io/aws-load-balancer-controller/
Apache License 2.0

Ingress gets deployed but target group doesn't have any targets #2938

Closed surajrathoresp closed 1 year ago

surajrathoresp commented 1 year ago

Hi team,

I deployed the game-2048 YAML from your example config.

I added the two annotations below to this YAML to make it work on my AWS setup:

alb.ingress.kubernetes.io/subnets: subnet-14d98d63,subnet-0660433c632558877
alb.ingress.kubernetes.io/target-node-labels: worker=true
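(Equivalently, a sketch of applying them to the already-created Ingress with kubectl annotate; the Ingress name and namespace are taken from the kubectl output further down:)

kubectl annotate ingress ingress-2048 -n game-2048 \
  'alb.ingress.kubernetes.io/subnets=subnet-14d98d63,subnet-0660433c632558877' \
  'alb.ingress.kubernetes.io/target-node-labels=worker=true'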

Environment

Problem statement: all resources (ALB, target group, security group, etc.) are created as expected, but no targets are attached to the target group.

Getting the error below in the controller logs:

kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name=aws-load-balancer-controller | grep error
{"level":"error","ts":1671609090.8563244,"logger":"controller.targetGroupBinding","msg":"Reconciler error","reconciler group":"elbv2.k8s.aws","reconciler kind":"TargetGroupBinding","name":"k8s-game2048-service2-3546c1c632","namespace":"game-2048","error":"providerID is not specified for node: k8w1"}

Exact error: providerID is not specified for node
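For reference, a generic kubectl check to see the provider ID on each node; the PROVIDER_ID column shows <none> when it is not set:

kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID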

I also tried changing the target type, but I get the same error:

alb.ingress.kubernetes.io/target-type: instance

Additional Context:

ubuntu@k8m:~$ kubectl get pods,deployment,svc,ing -n game-2048
NAME                                  READY   STATUS    RESTARTS   AGE
pod/deployment-2048-6bbb7c996-429x6   1/1     Running   0          61m
pod/deployment-2048-6bbb7c996-jhjj9   1/1     Running   0          61m
pod/deployment-2048-6bbb7c996-mjjdr   1/1     Running   0          61m
pod/deployment-2048-6bbb7c996-mkn2b   1/1     Running   0          61m
pod/deployment-2048-6bbb7c996-mxb2b   1/1     Running   0          61m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deployment-2048   5/5     5            5           61m

NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/service-2048   NodePort   10.103.143.92   <none>        80:30843/TCP   61m

NAME                                     CLASS   HOSTS   ADDRESS                                                                  PORTS   AGE
ingress.networking.k8s.io/ingress-2048   alb     *       k8s-game2048-ingress2-330cc1efad-501799701.us-west-2.elb.amazonaws.com   80      61m
ubuntu@k8m:~$ 
ubuntu@k8m:~$ kubectl get nodes -o wide
NAME   STATUS   ROLES           AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
k8m    Ready    control-plane   7d1h   v1.26.0   10.0.2.53     <none>        Ubuntu 22.04.1 LTS   5.15.0-1026-aws   containerd://1.6.14
k8w1   Ready    <none>          106m   v1.26.0   10.0.2.197    <none>        Ubuntu 22.04.1 LTS   5.15.0-1026-aws   containerd://1.6.14
k8w2   Ready    <none>          111m   v1.26.0   10.0.2.73     <none>        Ubuntu 22.04.1 LTS   5.15.0-1026-aws   containerd://1.6.14
ubuntu@k8m:~$ cat /etc/issue
Ubuntu 22.04.1 LTS 

How do I fix this error so that targets get added to the target group?

kishorj commented 1 year ago

@surajrathoresp, could you verify that the cloud controller manager (CCM) is set up and running as expected on your cluster? For further details, please refer to the AWS cloud provider getting started guide: https://github.com/kubernetes/cloud-provider-aws/blob/master/docs/getting_started.md
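A minimal check, assuming the CCM was deployed from the upstream cloud-provider-aws manifests into kube-system (the DaemonSet name and k8s-app label below match the upstream base manifests; adjust if your deployment differs):

kubectl get daemonset -n kube-system aws-cloud-controller-manager
kubectl get pods -n kube-system -l k8s-app=aws-cloud-controller-manager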

surajrathoresp commented 1 year ago

@kishorj is there any documentation on the CCM for Kubernetes beginners?

I am unable to follow the steps below from that doc (see the sketch after this list):

1. Temporarily stop the kube controller managers from running. This might be done by temporarily moving manifests out of kubelet's staticPodPath (or --pod-manifest-path), scaling down the kube controller manager deployment, or using systemctl stop if they are managed by systemd.
2. Add --cloud-provider=external to the kube-controller-manager config.
3. Add --cloud-provider=external to the kube-apiserver config.
4. Add --cloud-provider=external to each kubelet's config.
5. Add the tag kubernetes.io/cluster/your_cluster_id=owned (if resources are owned and managed by the cluster) or kubernetes.io/cluster/your_cluster_id=shared (if resources are shared between clusters and should not be destroyed if the cluster is destroyed) to your instances.
6. Apply the kustomize configuration: kubectl apply -k 'github.com/kubernetes/cloud-provider-aws/manifests/base/?ref=master', or run the cloud controller manager in some alternative way.
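A rough shell sketch of the tagging and deployment steps above (the instance ID and cluster ID are placeholders; repeat the tagging for every instance in the cluster):

# tag an EC2 instance so the AWS cloud provider can associate it with the cluster
# (use Value=shared if the resources are shared between clusters)
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/your_cluster_id,Value=owned

# deploy the cloud controller manager from the upstream kustomize base
kubectl apply -k 'github.com/kubernetes/cloud-provider-aws/manifests/base/?ref=master'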

kishorj commented 1 year ago

The live docs for cloud-provider-aws are at https://cloud-provider-aws.sigs.k8s.io/. Which k8s distro do you use? For example: EKS, kops.

kishorj commented 1 year ago

@surajrathoresp, I'm closing the issue. If you have further concerns, you can reopen or create a new issue.

huanggze commented 1 year ago

I am facing the same problem with my self-managed, non-EKS cluster...

huanggze commented 1 year ago

I've solved this problem by manually adding the EC2 instance ID to the spec.providerID field of each node.
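For anyone else hitting this on a self-managed cluster, a sketch of that workaround (the node name, availability zone, and instance ID below are placeholders; spec.providerID can only be patched while it is still unset, since the field is immutable once populated):

# set the AWS provider ID on a node that has none, format aws:///<availability-zone>/<instance-id>
kubectl patch node k8w1 -p '{"spec":{"providerID":"aws:///us-west-2a/i-0123456789abcdef0"}}'

Repeat for each worker node, using that node's own availability zone and EC2 instance ID.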