terraform-aws-modules / terraform-aws-vpc

Terraform module to create AWS VPC resources 🇺🇦
https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws
Apache License 2.0

Adding new VPC endpoints fails using vpc-endpoints module #771

Closed idelkysq closed 2 years ago

idelkysq commented 2 years ago

Description

When adding new VPC endpoints to an existing VPC using the latest version of the vpc-endpoints module (published on 11-Jan), I still get the same error I mentioned earlier (https://github.com/terraform-aws-modules/terraform-aws-vpc/issues/650#issuecomment-996812024).

Versions

Reproduction

Steps to reproduce the behavior:

  1. Add one or more VPC endpoints to the endpoints map, like:

    service_catalog = {
      service             = "servicecatalog"
      tags                = { Name = "servicecatalog-vpc-endpoint" }
      private_dns_enabled = true
    },
    
    codecommit = {
      service             = "codecommit"
      tags                = { Name = "codecommit-vpc-endpoint" }
      private_dns_enabled = true
    },
    
    secret_manager = {
      service             = "secretsmanager"
      tags                = { Name = "secretsmanager-vpc-endpoint" }
      private_dns_enabled = true
    }
  2. The VPC endpoints should be attached to a specific security group (SG)
  3. Run terraform plan
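
The steps above can be sketched as a minimal module call (a sketch only; the module label, VPC references, and SG name are hypothetical, not taken from our actual code):

```hcl
module "vpc_endpoints" {
  source = "terraform-aws-modules/vpc/aws//modules/vpc-endpoints"

  vpc_id             = module.vpc.vpc_id
  subnet_ids         = module.vpc.private_subnets
  security_group_ids = [aws_security_group.vpc_endpoints.id] # the shared SG from step 2

  endpoints = {
    service_catalog = {
      service             = "servicecatalog"
      tags                = { Name = "servicecatalog-vpc-endpoint" }
      private_dns_enabled = true
    }
    # ... codecommit and secret_manager entries as above
  }
}
```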

Code Snippet to Reproduce

Actual behavior

After terraform plan, I'm still getting:

Error: Invalid index

  on .terraform/modules/vpc_endpoint/nested/vpc_endpoint_nested/main.tf line 21, in resource "aws_vpc_endpoint" "this":
  21:   service_name      = data.aws_vpc_endpoint_service.this[each.key].service_name
    |----------------
    | data.aws_vpc_endpoint_service.this is object with 9 attributes
    | each.key is "service_catalog"

The given key does not identify an element in this collection value.

ERRO[0045] 1 error occurred:
    * exit status 1

where our modules/vpc_endpoint/nested/vpc_endpoint_nested/main.tf is the same as https://github.com/terraform-aws-modules/terraform-aws-vpc/blob/master/modules/vpc-endpoints/main.tf, using the for_each solution introduced in commit 19fcf0d.
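
In plain terms, the error says the resource's for_each produced the key service_catalog, but the data-source map only holds 9 entries, so indexing it fails. The mechanics are analogous to this illustration (Python used purely for demonstration; the key names mirror the error message):

```python
# The data source map, as reported by Terraform: an object with 9 attributes
data_map = {f"ep{i}": f"svc{i}" for i in range(9)}

# The resource iterates a 12-key map that also contains the 3 newly added endpoints
resource_keys = list(data_map) + ["service_catalog", "codecommit", "secret_manager"]

# Indexing data_map with a key it lacks is exactly what "Invalid index" reports
missing = [k for k in resource_keys if k not in data_map]
print(missing)  # → ['service_catalog', 'codecommit', 'secret_manager']
```

If both for_each expressions really evaluate the same var.endpoints, the key sets should be identical, which is why a mismatch like this usually points at the data source seeing a different (older or filtered) map than the resource.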

Terminal Output Screenshot(s)

(screenshot of the terminal output, 2022-03-28)
bryantbiggs commented 2 years ago

It looks like you are nesting the VPC endpoint module inside another module, so without a full reproduction it will be difficult to diagnose what your exact issue is.

idelkysq commented 2 years ago

Hi, sorry. I didn't attach all the code we use.

In fact, we have a Terragrunt configuration (File 1) that uses File 2 as its source, which in turn calls the vpc-endpoints Terraform module (File 3):

< FILE 1 >

locals {
  layer_vars = read_terragrunt_config(find_in_parent_folders("layer.hcl"))
  client_vars = read_terragrunt_config(find_in_parent_folders("client.hcl"))
  environment_vars = read_terragrunt_config(find_in_parent_folders("env.hcl"))
  region_vars = read_terragrunt_config(find_in_parent_folders("region.hcl"))
  service_vars = read_terragrunt_config(find_in_parent_folders("service.hcl"))

  serv_name  = local.service_vars.locals.service_name
  client_acronym = local.client_vars.locals.client_acronym
  env_name = local.environment_vars.locals.environment_name
  region_short_name = local.region_vars.locals.aws_region_short
  vpc_cidr_block = local.service_vars.locals.vpc_cidr_block
  azs = local.service_vars.locals.azs
  environment_name  = local.environment_vars.locals.environment_name
  aws_region = local.region_vars.locals.aws_region
  layer_name = local.layer_vars.locals.tf_layer_name

  destination_cidr_block = local.service_vars.locals.tgw_route_destination_cidr_block
}

include {
  path = find_in_parent_folders()
}

terraform {
  source = "< FILE 2 >"
}

inputs = {

  ### VPC
  name  = "${local.serv_name}_${local.client_acronym}_${local.env_name}_${local.region_short_name}_vpc"
  cidr = local.vpc_cidr_block
  azs = local.azs
  private_subnets = [cidrsubnet(local.vpc_cidr_block, 1, 0), cidrsubnet(local.vpc_cidr_block, 1, 1)]
  enable_dns_hostnames = true
  enable_dns_support = true
  enable_flow_log = true
  enable_s3_endpoint = true
  enable_servicecatalog_endpoint = true
  create_flow_log_cloudwatch_log_group = true
  create_flow_log_cloudwatch_iam_role  = true

  flow_log_cloudwatch_log_group_retention_in_days = 365

  ### VPC Endpoint Security Group
  vpc_endpoint_sg_name   = "${local.environment_name}_${local.serv_name}_sg_vpc"
  description = "Controls access to Security Group VPC Endpoint Interfaces"
  use_name_prefix = false
  auto_egress_rules = []
  auto_ingress_with_self = []

  ### VPC Endpoints
  endpoints = {
    logs = {
      service = "logs"
      tags = { Name = "logs-vpc-endpoint" }
      private_dns_enabled = true
    },

    monitoring = {
      service = "monitoring"
      tags = { Name = "monitoring-vpc-endpoint" }
      private_dns_enabled = true
    },

    ssm = {
      service = "ssm"
      tags = { Name = "ssm-vpc-endpoint" }
      private_dns_enabled = true
    },

    notebook = {
      service_name = "aws.sagemaker.eu-west-1.notebook"
      tags = { Name = "notebook-vpc-endpoint" }
      private_dns_enabled = true
    },

    sagemaker_runtime = {
      service = "sagemaker.runtime"
      tags = { Name = "sagemaker-runtime-vpc-endpoint" }
      private_dns_enabled = true
    },

    sagemaker_api = {
      service = "sagemaker.api"
      tags = { Name = "sagemaker_api-vpc-endpoint" }
      private_dns_enabled = true
    },

    sts = {
      service = "sts"
      tags = { Name = "sts-vpc-endpoint" }
      private_dns_enabled = true
    },

    ecr_dkr = {
      service = "ecr.dkr"
      tags = { Name = "ecr-dkr-vpc-endpoint" }
      private_dns_enabled = true
    },

    ecr_api = {
      service = "ecr.api"
      tags = { Name = "ecr-api-vpc-endpoint" }
      private_dns_enabled = true
    },

    service_catalog = {
      service = "servicecatalog"
      tags = { Name = "servicecatalog-vpc-endpoint" }
      private_dns_enabled = true
    },

    codecommit = {
      service = "codecommit"
      tags = { Name = "codecommit-vpc-endpoint" }
      private_dns_enabled = true
    },

    secret_manager = {
      service = "secretsmanager"
      tags = { Name = "secretsmanager-vpc-endpoint" }
      private_dns_enabled = true
    }
  }

  tags = merge(
    local.layer_vars.locals.tags,
    local.environment_vars.locals.tags,
    local.region_vars.locals.tags,
    local.service_vars.locals.tags
  )
}

< FILE 2 >

# VPC
module "vpc" {

  source = "terraform_modules_external.git//terraform-aws-vpc"

  azs = var.azs
  cidr = var.cidr
  create_flow_log_cloudwatch_iam_role = var.create_flow_log_cloudwatch_iam_role
  create_flow_log_cloudwatch_log_group = var.create_flow_log_cloudwatch_log_group
  enable_dns_hostnames = var.enable_dns_hostnames
  enable_dns_support = var.enable_dns_support
  enable_flow_log = var.enable_flow_log
  enable_s3_endpoint  = var.enable_s3_endpoint
  flow_log_cloudwatch_log_group_retention_in_days = var.flow_log_cloudwatch_log_group_retention_in_days
  name = var.name
  private_subnets = var.private_subnets
}

# VPC Endpoint Security Group

module "vpc_endpoint_sg" {

  depends_on = [module.vpc]

  source = "terraform_modules_external.git//terraform-aws-security-group/modules/https-443/"

  auto_egress_rules = var.auto_egress_rules
  auto_ingress_with_self = var.auto_ingress_with_self
  description = var.description
  ingress_cidr_blocks = [module.vpc.vpc_cidr_block]
  name = var.vpc_endpoint_sg_name
  use_name_prefix = var.use_name_prefix
  vpc_id = module.vpc.vpc_id
}

# VPC Endpoint
module "vpc_endpoint" {

  depends_on = [module.vpc]

  source = "< FILE 3 >"

  endpoints = var.endpoints
  subnet_ids = module.vpc.private_subnets
  security_group_ids = [module.vpc_endpoint_sg.this_security_group_id]
  vpc_id = module.vpc.vpc_id
}

< FILE 3 > (the Terraform module for vpc-endpoints)

################################################################################
# Endpoint(s)
################################################################################

data "aws_vpc_endpoint_service" "this" {
  for_each = { for k, v in var.endpoints : k => v if var.create }

  service      = lookup(each.value, "service", null)
  service_name = lookup(each.value, "service_name", null)

  filter {
    name   = "service-type"
    values = [lookup(each.value, "service_type", "Interface")]
  }
}

resource "aws_vpc_endpoint" "this" {
  for_each = { for k, v in var.endpoints : k => v if var.create }

  vpc_id            = var.vpc_id
  service_name      = data.aws_vpc_endpoint_service.this[each.key].service_name
  vpc_endpoint_type = lookup(each.value, "service_type", "Interface")
  auto_accept       = lookup(each.value, "auto_accept", null)

  security_group_ids  = lookup(each.value, "service_type", "Interface") == "Interface" ? distinct(concat(var.security_group_ids, lookup(each.value, "security_group_ids", []))) : null
  subnet_ids          = lookup(each.value, "service_type", "Interface") == "Interface" ? distinct(concat(var.subnet_ids, lookup(each.value, "subnet_ids", []))) : null
  route_table_ids     = lookup(each.value, "service_type", "Interface") == "Gateway" ? lookup(each.value, "route_table_ids", null) : null
  policy              = lookup(each.value, "policy", null)
  private_dns_enabled = lookup(each.value, "service_type", "Interface") == "Interface" ? lookup(each.value, "private_dns_enabled", null) : null

  tags = merge(var.tags, lookup(each.value, "tags", {}))

  timeouts {
    create = lookup(var.timeouts, "create", "10m")
    update = lookup(var.timeouts, "update", "10m")
    delete = lookup(var.timeouts, "delete", "10m")
  }
}
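
One variant we could try (just a sketch, not the upstream module's code) computes the filtered map once in a local, so the data source and the resource are guaranteed to iterate identical key sets:

```hcl
locals {
  # Filter once; both blocks below share exactly the same keys
  endpoints_to_create = { for k, v in var.endpoints : k => v if var.create }
}

data "aws_vpc_endpoint_service" "this" {
  for_each = local.endpoints_to_create

  service      = lookup(each.value, "service", null)
  service_name = lookup(each.value, "service_name", null)

  filter {
    name   = "service-type"
    values = [lookup(each.value, "service_type", "Interface")]
  }
}

resource "aws_vpc_endpoint" "this" {
  for_each = local.endpoints_to_create

  vpc_id       = var.vpc_id
  service_name = data.aws_vpc_endpoint_service.this[each.key].service_name
  # ... remaining arguments unchanged from the module above
}
```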

The error in the previous message appeared when running terragrunt plan.

Do you need any other info in this case? Thank you!

github-actions[bot] commented 2 years ago

This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove stale label or comment or this issue will be closed in 10 days

github-actions[bot] commented 2 years ago

This issue was automatically closed because it remained stale for 10 days.

github-actions[bot] commented 1 year ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.