hashicorp / terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
https://www.terraform.io/

mock provider crashes when tried to override the output of a submodule #35409

Closed. ankitjhingan closed this issue 4 months ago.

ankitjhingan commented 4 months ago

I want to implement a unit test (plan only) for my main module, which calls an external submodule. I just want to mock the submodule's returned output values using an override from the test file, without making any changes to my real code.
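For context, this is the general shape of what I am attempting, shown here only as a minimal sketch with placeholder names (run "example", module.example_submodule); my real test file is included in full below. The idea is a mocked provider, a plan-only run, and an override_module block that replaces the submodule's outputs:

mock_provider "aws" {}

run "example" {
  # plan-only run of the module under test
  command = plan

  override_module {
    # placeholder address; in my real test this is module.eks_managed_node_group
    target  = module.example_submodule
    outputs = {
      example_output = "mocked-value"
    }
  }
}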

Terraform Version

terraform --version     
Terraform v1.8.0
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v5.56.1
+ provider registry.terraform.io/hashicorp/cloudinit v2.3.4
+ provider registry.terraform.io/hashicorp/null v3.2.2

Terraform Configuration Files

I am trying to override the output of the module, adding an extra dummy attribute to each entry in the map object, in module/managed-node-group/tests/module_override.tftest.hcl:


mock_provider "aws" {
   alias = "mock"
}

run "verify_node_groups" {
  providers = {
    aws = aws.mock
  }
  variables {
    mng = {
      cluster_name                      = "eks"
      cluster_version                   = "1.2"
      subnet_ids                        = ["subnet1a1233", "subnet1a1234", "subnet1a1234"]
      node_security_group_id            = "sg-123456"
      cluster_primary_security_group_id = "sg-123456"
      owner                             = "owner@mail.com.au"
      company                           = "global"
      bu                                = "subscription"
      env_type                          = "prod"
      cluster_type                      = "pluto"
    }
    additional_labels = {
      workloadType = "generic"
    }
    node_groups = {
      ncgt-spot-ng = {
        min_size                   = 1
        max_size                   = 10
        desired_size               = 5
        use_custom_launch_template = false
        create_launch_template     = false
        launch_template_version    = "1"
        instance_types             = ["m5a.4xlarge", "m5.4xlarge"]
        capacity_type              = "SPOT"
        labels = {
          capacity_type = "spot"
        }
      },
      ncgt-ondemand-ng = {
        min_size                   = 1
        max_size                   = 10
        desired_size               = 1
        use_custom_launch_template = false
        create_launch_template     = false
        launch_template_version    = "1"
        instance_types             = ["m5d.4xlarge"]
        capacity_type              = "ON_DEMAND"
        labels = {
          capacity_type = "ondemand"
        }
      }
    }
    iam_role_arn = "arn:12345:blah"
  }

 override_module {
    target =  module.eks_managed_node_group
    outputs = {"global-ondemand-ng":{"dummy":"value","autoscaling_group_schedule_arns":{},"iam_role_arn":"arn:12345:blah","iam_role_name":null,"iam_role_unique_id":null,"launch_template_arn":null,"launch_template_id":null,"launch_template_latest_version":null,"launch_template_name":null,"node_group_arn":"o5n3370n","node_group_autoscaling_group_names":[],"node_group_id":"h8afoxgr","node_group_labels":{"ManagedBy":"Terraform","capacity_type":"ondemand","workloadType":"generic"},"node_group_resources":[],"node_group_status":"iuxahusz","node_group_taints":[],"platform":"linux"},"global-spot-ng":{"dummy":"value","autoscaling_group_schedule_arns":{},"iam_role_arn":"arn:12345:blah","iam_role_name":null,"iam_role_unique_id":null,"launch_template_arn":null,"launch_template_id":null,"launch_template_latest_version":null,"launch_template_name":null,"node_group_arn":"up4ytrrz","node_group_autoscaling_group_names":[],"node_group_id":"gckfw7td","node_group_labels":{"ManagedBy":"Terraform","capacity_type":"spot","workloadType":"generic"},"node_group_resources":[],"node_group_status":"sakk5jpy","node_group_taints":[],"platform":"linux"}}

  }

 assert {
    condition     = length(keys(module.eks_managed_node_group)) == 3
    error_message = "Expected Labels : ${jsonencode(module.eks_managed_node_group)}"
  }
}

Debug Output

terraform test --verbose
tests/module_override.tftest.hcl... in progress

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Terraform crashed! This is always indicative of a bug within Terraform.
Please report the crash with Terraform[1] so that we can fix this.

When reporting bugs, please include your terraform version, the stack trace
shown below, and any additional information which may help replicate the issue.

[1]: https://github.com/hashicorp/terraform/issues

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

panic: value for module.eks_managed_node_group["global-ondemand-ng"].module.user_data.var.create was requested before it was provided
goroutine 1238 [running]:
runtime/debug.Stack()
        /Users/runner/hostedtoolcache/go/1.22.1/x64/src/runtime/debug/stack.go:24 +0x64
github.com/hashicorp/terraform/internal/logging.PanicHandler()
        /Users/runner/work/terraform/terraform/internal/logging/panic.go:84 +0x198
panic({0x103144b40?, 0x140009c6850?})
        /Users/runner/hostedtoolcache/go/1.22.1/x64/src/runtime/panic.go:770 +0x124
github.com/hashicorp/terraform/internal/namedvals.(*values[...]).GetExactResult(0x103849340, {{0x14001e6e0a0, 0x1, 0x1}, {{}, {0x1400097d180, 0xf}}})
        /Users/runner/work/terraform/terraform/internal/namedvals/values.go:88 +0x180
github.com/hashicorp/terraform/internal/namedvals.(*State).GetLocalValue(0x1032881a0?, {{0x14001e6e0a0, 0x1, 0x1}, {{}, {0x1400097d180, 0xf}}})
        /Users/runner/work/terraform/terraform/internal/namedvals/state.go:77 +0xe8
github.com/hashicorp/terraform/internal/terraform.(*evaluationStateData).GetLocalValue(0x140035c0750, {{}, {0x1400097d180?, 0x0?}}, {{0x14000a7ecd0, 0x50}, {0x1e8, 0xb, 0x4f59}, {0x1e8, ...}})
        /Users/runner/work/terraform/terraform/internal/terraform/evaluate.go:328 +0x17c
github.com/hashicorp/terraform/internal/lang.(*Scope).evalContext(0x140035c07e0, {0x140000c9d60, 0x1, 0x1}, {0x0, 0x0})
        /Users/runner/work/terraform/terraform/internal/lang/eval.go:391 +0x98c
github.com/hashicorp/terraform/internal/lang.(*Scope).EvalContext(...)
        /Users/runner/work/terraform/terraform/internal/lang/eval.go:245
github.com/hashicorp/terraform/internal/lang.(*Scope).EvalExpr(0x140035c07e0, {0x10382c4b0, 0x140003520e0}, {{0x10382bc08?, 0x140000114f0?}})
        /Users/runner/work/terraform/terraform/internal/lang/eval.go:170 +0x8c
github.com/hashicorp/terraform/internal/terraform.(*BuiltinEvalContext).EvaluateExpr(0x6f6c206f74207d4e?, {0x10382c4b0, 0x140003520e0}, {{0x10382bc08?, 0x140000114f0?}}, {0x0?, 0x0?})
        /Users/runner/work/terraform/terraform/internal/terraform/eval_context_builtin.go:322 +0x84
github.com/hashicorp/terraform/internal/terraform.evaluateCountExpressionValue({0x10382c4b0, 0x140003520e0}, {0x10384bc68?, 0x140018aeff0?})
        /Users/runner/work/terraform/terraform/internal/terraform/eval_count.go:71 +0x74
github.com/hashicorp/terraform/internal/terraform.evaluateCountExpression({0x10382c4b0, 0x140003520e0}, {0x10384bc68?, 0x140018aeff0?}, 0x0)
        /Users/runner/work/terraform/terraform/internal/terraform/eval_count.go:31 +0x38
github.com/hashicorp/terraform/internal/terraform.(*NodeAbstractResource).writeResourceState(0x140040f6000, {0x10384bc68, 0x140018aeff0}, {{}, {0x14001e6e0a0, 0x1, 0x1}, {{}, 0x44, {0x140008f35d8, ...}, ...}})
        /Users/runner/work/terraform/terraform/internal/terraform/node_resource_abstract.go:424 +0x150
github.com/hashicorp/terraform/internal/terraform.(*nodeExpandPlannableResource).expandResourceInstances(0x14000c52c00, {0x10384bc68, 0x14001ea2e10}, {{}, {0x14001e6e0a0, 0x1, 0x1}, {{}, 0x44, {0x140008f35d8, ...}, ...}}, ...)
        /Users/runner/work/terraform/terraform/internal/terraform/node_resource_plan.go:474 +0xf8
github.com/hashicorp/terraform/internal/terraform.(*nodeExpandPlannableResource).DynamicExpand(0x14000c52c00, {0x10384bc68, 0x14001ea2e10})
        /Users/runner/work/terraform/terraform/internal/terraform/node_resource_plan.go:198 +0x450
github.com/hashicorp/terraform/internal/terraform.(*Graph).walk.func1({0x10374b2c0, 0x14000c52c00})
        /Users/runner/work/terraform/terraform/internal/terraform/graph.go:122 +0x5f8
github.com/hashicorp/terraform/internal/dag.(*Walker).walkVertex(0x14000c0c600, {0x10374b2c0, 0x14000c52c00}, 0x140008f87c0)
        /Users/runner/work/terraform/terraform/internal/dag/walk.go:384 +0x2a8
created by github.com/hashicorp/terraform/internal/dag.(*Walker).Update in goroutine 1026
        /Users/runner/work/terraform/terraform/internal/dag/walk.go:307 +0xc30

Expected Behavior

It should return the output of the target module, with the additional dummy attribute in each entry of the map of objects:

{"global-ondemand-ng":{"dummy":"value","autoscaling_group_schedule_arns":{},"iam_role_arn":"arn:12345:blah","iam_role_name":null,"iam_role_unique_id":null,"launch_template_arn":null,"launch_template_id":null,"launch_template_latest_version":null,"launch_template_name":null,"node_group_arn":"o5n3370n","node_group_autoscaling_group_names":[],"node_group_id":"h8afoxgr","node_group_labels":{"ManagedBy":"Terraform","capacity_type":"ondemand","workloadType":"generic"},"node_group_resources":[],"node_group_status":"iuxahusz","node_group_taints":[],"platform":"linux"},"global-spot-ng":{"dummy":"value","autoscaling_group_schedule_arns":{},"iam_role_arn":"arn:12345:blah","iam_role_name":null,"iam_role_unique_id":null,"launch_template_arn":null,"launch_template_id":null,"launch_template_latest_version":null,"launch_template_name":null,"node_group_arn":"up4ytrrz","node_group_autoscaling_group_names":[],"node_group_id":"gckfw7td","node_group_labels":{"ManagedBy":"Terraform","capacity_type":"spot","workloadType":"generic"},"node_group_resources":[],"node_group_status":"sakk5jpy","node_group_taints":[],"platform":"linux"}}

along with the successful plan output

terraform test --verbose
tests/module_override.tftest.hcl... in progress
  run "verify_node_groups"... pass

# module.eks_managed_node_group["global-ondemand-ng"].data.aws_caller_identity.current:
data "aws_caller_identity" "current" {
    account_id = "wsopjgux"
    arn        = "l8oohf13"
    id         = "llvvupsq"
    user_id    = "kxuw57dp"
}

# module.eks_managed_node_group["global-ondemand-ng"].data.aws_partition.current:
data "aws_partition" "current" {
    dns_suffix         = "cwa1bhxi"
    id                 = "i5rv8ayz"
    partition          = "bjf84rer"
    reverse_dns_prefix = "jv7rimhv"
}

# module.eks_managed_node_group["global-ondemand-ng"].aws_eks_node_group.this[0]:
resource "aws_eks_node_group" "this" {
    ami_type               = "chljtpdo"
    arn                    = "anok0xla"
    capacity_type          = "ON_DEMAND"
    cluster_name           = "eks"
    disk_size              = 0
    id                     = "1724g8ix"
    instance_types         = [
        "m5d.4xlarge",
    ]
    labels                 = {
        "ManagedBy"     = "Terraform"
        "capacity_type" = "ondemand"
        "workloadType"  = "generic"
    }
    node_group_name        = "0weavmio"
    node_group_name_prefix = "global-ondemand-ng-"
    node_role_arn          = "arn:12345:blah"
    release_version        = "3xwgynju"
    resources              = []
    status                 = "ldf9r2nb"
    subnet_ids             = [
        "subnet1a1233",
        "subnet1a1234",
    ]
    tags                   = {
        "Name"         = "global-ondemand-ng"
        "bu"           = "subscription"
        "cluster_type" = "pluto"
        "company"      = "global"
        "environment"  = "prod"
        "owner"        = "owner@mail.com.au"
        "product"      = "global-subscription-prod-pluto-eks-mng"
    }
    tags_all               = {}
    version                = "1.2"

    scaling_config {
        desired_size = 1
        max_size     = 10
        min_size     = 1
    }

    timeouts {}

    update_config {
        max_unavailable_percentage = 33
    }
}

# module.eks_managed_node_group["global-ondemand-ng"].module.user_data.null_resource.validate_cluster_service_cidr:
resource "null_resource" "validate_cluster_service_cidr" {
    id = "768393958574934274"
}
# module.eks_managed_node_group["global-spot-ng"].data.aws_caller_identity.current:
data "aws_caller_identity" "current" {
    account_id = "kzpa6i06"
    arn        = "song3144"
    id         = "8msignv4"
    user_id    = "k5xt5a59"
}

# module.eks_managed_node_group["global-spot-ng"].data.aws_partition.current:
data "aws_partition" "current" {
    dns_suffix         = "l0xgijup"
    id                 = "rjdgmm7n"
    partition          = "fb4zzac1"
    reverse_dns_prefix = "zrgq7zx5"
}

# module.eks_managed_node_group["global-spot-ng"].aws_eks_node_group.this[0]:
resource "aws_eks_node_group" "this" {
    ami_type               = "3fkmdudx"
    arn                    = "2d9pjhv8"
    capacity_type          = "SPOT"
    cluster_name           = "eks"
    disk_size              = 0
    id                     = "ktl7krxq"
    instance_types         = [
        "m5a.4xlarge",
        "m5.4xlarge",
    ]
    labels                 = {
        "ManagedBy"     = "Terraform"
        "capacity_type" = "spot"
        "workloadType"  = "generic"
    }
    node_group_name        = "baurem29"
    node_group_name_prefix = "global-spot-ng-"
    node_role_arn          = "arn:12345:blah"
    release_version        = "oot5bf4d"
    resources              = []
    status                 = "52tooq5u"
    subnet_ids             = [
        "subnet1a1233",
        "subnet1a1234",
    ]
    tags                   = {
        "Name"         = "global-spot-ng"
        "bu"           = "subscription"
        "cluster_type" = "pluto"
        "company"      = "global"
        "environment"  = "prod"
        "owner"        = "owner@mail.com.au"
        "product"      = "global-subscription-prod-pluto-eks-mng"
    }
    tags_all               = {}
    version                = "1.2"

    scaling_config {
        desired_size = 5
        max_size     = 10
        min_size     = 1
    }

    timeouts {}

    update_config {
        max_unavailable_percentage = 33
    }
}

# module.eks_managed_node_group["global-spot-ng"].module.user_data.null_resource.validate_cluster_service_cidr:
resource "null_resource" "validate_cluster_service_cidr" {
    id = "5055151943602182367"
}

Outputs:

eks_managed_node_group = {
    global-ondemand-ng = {
        autoscaling_group_schedule_arns    = {}
        iam_role_arn                       = "arn:12345:blah"
        node_group_arn                     = "anok0xla"
        node_group_autoscaling_group_names = []
        node_group_id                      = "1724g8ix"
        node_group_labels                  = {
            "ManagedBy"     = "Terraform"
            "capacity_type" = "ondemand"
            "workloadType"  = "generic"
        }
        node_group_resources               = []
        node_group_status                  = "ldf9r2nb"
        node_group_taints                  = []
        platform                           = "linux"
    }
    global-spot-ng     = {
        autoscaling_group_schedule_arns    = {}
        iam_role_arn                       = "arn:12345:blah"
        node_group_arn                     = "2d9pjhv8"
        node_group_autoscaling_group_names = []
        node_group_id                      = "ktl7krxq"
        node_group_labels                  = {
            "ManagedBy"     = "Terraform"
            "capacity_type" = "spot"
            "workloadType"  = "generic"
        }
        node_group_resources               = []
        node_group_status                  = "52tooq5u"
        node_group_taints                  = []
        platform                           = "linux"
    }
}

tests/module_override.tftest.hcl... tearing down
tests/module_override.tftest.hcl... pass

Actual Behavior

terraform test --verbose
tests/module_override.tftest.hcl... in progress

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Terraform crashed! This is always indicative of a bug within Terraform.
Please report the crash with Terraform[1] so that we can fix this.

When reporting bugs, please include your terraform version, the stack trace shown below, and any additional information which may help replicate the issue.

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Steps to Reproduce

terraform init
terraform test

Additional Context

here is the folder structure

module
    └── managed-node-group
        ├── main.tf
        ├── output.tf
        ├── tests
        │   ├── labels.tftest.hcl
        │   └── node_group.tftest.hcl
        └── variables.tf

Here is the calling module block eks_managed_node_group, which invokes the Managed Node Group submodule:


module "eks_managed_node_group" {
  source          = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"

  for_each        = var.node_groups
  name            = each.key
  cluster_name    = var.mng.cluster_name
  create_iam_role = false
  iam_role_additional_policies = {
    AmazonEC2ContainerRegistryReadOnly = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  }

  block_device_mappings = {
    xvda = {
      device_name = "/dev/xvda"
      ebs         = {
        volume_size           = 20
        volume_type           = "gp3"
        encrypted             = false
        delete_on_termination = true
      }
    }
  }

  use_custom_launch_template        = each.value.use_custom_launch_template
  launch_template_version           = each.value.launch_template_version
  cluster_version                   = var.mng.cluster_version
  cluster_service_cidr              = var.service_cidr
  subnet_ids                        = var.mng.subnet_ids
  cluster_primary_security_group_id = var.mng.cluster_primary_security_group_id
  vpc_security_group_ids            = [var.mng.node_security_group_id]
  min_size                          = each.value.min_size
  max_size                          = each.value.max_size
  desired_size                      = each.value.desired_size
  instance_types                    = each.value.instance_types
  capacity_type                     = each.value.capacity_type
  labels                            = merge(local.labels, each.value.labels)
  tags                              = local.common_tags
  iam_role_arn                      = var.iam_role_arn
}

References

I am using the submodule below:
https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/modules/eks-managed-node-group
liamcervante commented 4 months ago

Hi @ankitjhingan, thanks for filing this!

I think this is a duplicate of either https://github.com/hashicorp/terraform/issues/35097 or https://github.com/hashicorp/terraform/issues/35019. Are you able to upgrade to v1.8.3 or later and try again? I believe this was fixed in later versions.

Thanks!
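As a side note (purely illustrative, not part of the fix itself): if you want to guarantee that the tests only run on a release that includes the fix, you could add a required_version constraint to the configuration under test, for example:

terraform {
  # illustrative constraint only: require a release at or after the version suggested above
  required_version = ">= 1.8.3"
}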

liamcervante commented 4 months ago

I will close this as a duplicate. Reply here and we can reopen if upgrading doesn't fix this crash for you.

github-actions[bot] commented 3 months ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.