terraform-aws-modules / terraform-aws-eks

Terraform module to create Amazon Elastic Kubernetes Service (EKS) resources 🇺🇦
https://registry.terraform.io/modules/terraform-aws-modules/eks/aws
Apache License 2.0

Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refused #911

vrathore18 closed this issue 2 years ago

vrathore18 commented 4 years ago

I started getting this error:

Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refused

  on .terraform/modules/eks/terraform-aws-eks-11.1.0/aws_auth.tf line 62, in resource "kubernetes_config_map" "aws_auth":
  62: resource "kubernetes_config_map" "aws_auth" {

All my code was working fine, but after I upgraded my Terraform and provider versions I started getting the above error.

Versions on which everything was working:

- providers: aws 2.49, kubernetes 1.10.0, helm 0.10.4, eks module 4.0.2
- others: terraform 0.11.13, kubectl 1.11.7, aws-iam-authenticator 0.4.0-alpha.1

My versions now: terraform 0.12.26, kubectl 1.16.8, aws-iam-authenticator 0.5.0

eks.yaml

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "12.1.0"

  cluster_name    = var.name
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id
  cluster_version = var.cluster_version
  manage_aws_auth = true

  kubeconfig_aws_authenticator_additional_args = ["-r", "arn:aws:iam::${var.target_account_id}:role/terraform"]

  worker_groups = [
    {
      instance_type        = var.eks_instance_type
      asg_desired_capacity = var.eks_asg_desired_capacity
      asg_max_size         = var.eks_asg_max_size
      key_name             = var.key_name
    }
  ]

  map_accounts = [var.target_account_id]

  map_roles = [
    {
      rolearn  = format("arn:aws:iam::%s:role/admin", var.target_account_id)
      username = format("%s-admin", var.name)
      groups   = ["system:masters"]
    }
  ]

  # don't write a local kubeconfig, as we do it ourselves below
  write_kubeconfig = false
}

resource "local_file" "kubeconfig" {
  content  = module.eks.kubeconfig
  filename = "./.kube_config.yaml"
}

In the above code, write_kubeconfig = false disables the module's own kubeconfig output, and the local_file resource writes the kubeconfig instead. I am using this file in the kubernetes and helm providers.

provider.yaml

provider "aws" {
  region  = var.region
  version = "~> 2.65.0"

  assume_role {
    role_arn = "arn:aws:iam::${var.target_account_id}:role/terraform"
  }
}

provider "kubernetes" {
  config_path = "./.kube_config.yaml"
  version     = "~> 1.11.3"
}

provider "helm" {
  version = "~> 1.2.2"

  kubernetes {
    config_path = "./.kube_config.yaml"
  }
}

On terraform apply, the run fails to create module.eks.kubernetes_config_map.aws_auth[0].

I tried some of the suggestions mentioned in https://github.com/terraform-aws-modules/terraform-aws-eks/issues/817, but they didn't work for me.
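A common cause of this error is the kubernetes provider being initialized before ./.kube_config.yaml exists: with no usable config, the provider falls back to its default of localhost:80, which produces exactly this "connection refused" message. One remedy often suggested in the issues linked above is to configure the provider from EKS data sources instead of a file, so there is nothing to fall back to. A minimal sketch, assuming the module's cluster_id output and the kubernetes provider 1.x:

# Sketch: point the kubernetes provider at the cluster API directly,
# instead of a kubeconfig file that may not exist yet on first apply.
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false   # kubernetes provider 1.x only; removed in 2.x
  version                = "~> 1.11.3"
}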

icicimov commented 2 years ago

@bryantbiggs please ignore my previous message I should have read your last message before replying, sorry about that :-/

For others following along: running terraform plan -refresh=false .... as suggested in one of the issues Bryant linked worked for me, and the plan finished successfully after a change to the module's subnets.

P.S. This is of course hardly an acceptable solution (more of a workaround): most projects contain many other modules, and permanently running plans with -refresh=false (e.g. in a CI/CD pipeline) would hide changes to those modules that one might want to apply in the future.
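For reference, the workaround amounts to nothing more than the standard Terraform CLI flags below; the plan file name is illustrative:

# one-off plan/apply that skips refreshing existing state (workaround, not a fix)
terraform plan -refresh=false -out=plan.tfplan
terraform apply plan.tfplan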

bryantbiggs commented 2 years ago

No worries

github-actions[bot] commented 1 year ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.