scholzj / terraform-aws-kubernetes

Terraform module for Kubernetes setup on AWS
Apache License 2.0

LoadBalancer Service stuck in pending #39

Open ltsar-federated opened 1 year ago

ltsar-federated commented 1 year ago

Nice module.

I tried creating a LoadBalancer Service to expose the cluster through an ALB load balancer, but it stays in the Pending state. Do you have any recommendations for exposing services publicly using this module?

It says Ingress is supported (it appears to be the nginx-ingress controller). However, I would like to keep my nodes private and use an external load balancer to route traffic.

I would love to hear your thoughts on how to expose services publicly. The docs are a bit sparse.

Thank you, Liam

scholzj commented 1 year ago

I normally use Classic Load Balancers; those seemed to work fine the last time I tried them.

ltsar-federated commented 1 year ago

Could you please give me a bit more detail on how you use Classic Load Balancers? I have a working cluster with private nodes. I create a LoadBalancer Service and... nothing happens. I deployed using the configuration below. I see that, in order for the AWS cloud provider to work, I need to fix this issue:

erated-kubernetes-staging-master/i-02f5edd2d47c0aefa is not authorized to perform: iam:CreateServiceLinkedRole on resource: arn:aws:iam::794626499202:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing because no identity-based policy allows the iam:CreateServiceLinkedRole action
module "kubernetes" {
  source = "./terraform-aws-kubernetes"

  aws_region           = "us-east-1"
  cluster_name         = "*****-kubernetes-staging"
  master_instance_type = "t2.large"
  worker_instance_type = "t2.large"
  ssh_public_key       = "${path.module}/pubkey.pub"
  ssh_access_cidr      = ["0.0.0.0/0"]
  api_access_cidr      = ["0.0.0.0/0"]
  min_worker_count     = 3
  max_worker_count     = 6
  hosted_zone          = aws_route53_zone.staging.name
  hosted_zone_private  = false

  master_subnet_id = aws_subnet.ops_vpc_public_subnets[0].id
  worker_subnet_ids = [
    aws_subnet.ops_vpc_public_subnets[0].id,
    aws_subnet.ops_vpc_public_subnets[1].id,
    aws_subnet.ops_vpc_public_subnets[2].id
  ]

  # Tags
  tags = {
    Application = "AWS-Kubernetes-staging"
  }

  # Tags in a different format for Auto Scaling Group
  tags2 = [
    {
      key                 = "Application"
      value               = "AWS-Kubernetes-staging"
      propagate_at_launch = true
    }
  ]

  addons = [
    "https://raw.githubusercontent.com/scholzj/terraform-aws-kubernetes/master/addons/storage-class.yaml",
    "https://raw.githubusercontent.com/scholzj/terraform-aws-kubernetes/master/addons/heapster.yaml",
    "https://raw.githubusercontent.com/scholzj/terraform-aws-kubernetes/master/addons/dashboard.yaml",
    "https://raw.githubusercontent.com/scholzj/terraform-aws-kubernetes/master/addons/external-dns.yaml",
    "https://raw.githubusercontent.com/scholzj/terraform-aws-kubernetes/master/addons/autoscaler.yaml",
    "https://raw.githubusercontent.com/scholzj/terraform-aws-kubernetes/master/addons/ingress.yaml"
  ]
}
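
For reference, the iam:CreateServiceLinkedRole error above can apparently be avoided by creating the ELB service-linked role ahead of time, so the in-cluster cloud provider never needs that permission itself. A minimal Terraform sketch, assuming the role does not already exist in the account (the resource label elb is arbitrary):

# Sketch: pre-create the ELB service-linked role once per account, so the
# cloud provider running on the masters does not need iam:CreateServiceLinkedRole.
# Note: this fails if the role already exists in the account.
resource "aws_iam_service_linked_role" "elb" {
  aws_service_name = "elasticloadbalancing.amazonaws.com"
}

Alternatively, attaching a policy that allows iam:CreateServiceLinkedRole to the masters' instance role should achieve the same thing.
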
scholzj commented 1 year ago

I just create a type: LoadBalancer Kubernetes Service and it creates a Classic Load Balancer. I'm not sure if something changed in AWS, if your account has a different setup, or if ALB simply requires additional rights, etc. I guess if you get an error you should try to address the issue it complains about and see if it helps. I'm afraid I do not have much time right now to do any investigation or testing myself, sorry.
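
For completeness, a minimal sketch of such a Service, written here with the Terraform kubernetes provider since the thread already uses Terraform (a plain YAML manifest with the same fields works the same way; the name and selector echo are placeholders):

# Hypothetical example: a type: LoadBalancer Service. With the in-tree AWS
# cloud provider this should result in an internet-facing Classic Load Balancer.
resource "kubernetes_service" "echo" {
  metadata {
    name = "echo"
  }

  spec {
    selector = {
      app = "echo"
    }

    port {
      port        = 80
      target_port = 8080
    }

    type = "LoadBalancer"
  }
}

If the Service still stays in Pending after the IAM permission is sorted out, it may also be worth checking that the subnets carry the kubernetes.io/cluster/<cluster-name> tag the AWS cloud provider uses to discover where to place the load balancer.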