hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

CodePipeline with dynamic actions always reports changes #14357

Open · marcus-bcl opened this issue 4 years ago

marcus-bcl commented 4 years ago


Description

When updating an aws_codepipeline resource with dynamic actions that make use of run_order, the plan always reports changes.

Terraform CLI and Terraform AWS Provider Version

Terraform v0.12.24
+ provider.aws v2.70.0

Affected Resource(s)

aws_codepipeline

Terraform Configuration Files

The following example defines a map of Terraform modules to plan and apply in a pipeline, along with the order in which they should each be applied. Actions with the same order should run in parallel. A plan action and an apply action are created for each "module", and the run_order is calculated so that they all happen in the correct order.

This applies successfully; however, after running terraform plan again without making any code changes, Terraform thinks there are changes. This seems to be because the actions are listed in the order they are generated in the code, rather than the order returned by the AWS API (which appears to be pipeline order). Is there any way to ignore changes in the ordering of action blocks when run_order is supplied?

locals {
  modules = {
    # key = name of module to plan+apply
    # value = module dependency order - module-b and module-c can run in parallel
    "module-a" = 0
    "module-b" = 1
    "module-c" = 1
    "module-d" = 2
  }
}

resource "aws_codepipeline" "pipeline" {
  name     = "test"
  role_arn = var.role_arn
  artifact_store {
    type     = "S3"
    location = var.artifacts_bucket
  }
  stage {
    name = "Source"
    action {
      name             = "Source"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["source"]
      configuration = {
        Owner  = var.github_owner
        Repo   = var.github_repo
        Branch = var.github_branch
      }
    }
  }

  stage {
    name = "Deploy"
    dynamic "action" {
      for_each = local.modules
      content {
        name             = "${action.key}-plan"
        input_artifacts  = ["source"]
        output_artifacts = ["${action.key}-tfplan"]
        category         = "Build"
        owner            = "AWS"
        provider         = "CodeBuild"
        version          = "1"
        run_order        = action.value * 2 + 1 # run_order must be at least 1
        configuration = {
          ProjectName   = var.codebuild_plan_project
          PrimarySource = "source"
        }
      }
    }
    dynamic "action" {
      for_each = local.modules
      content {
        name            = "${action.key}-apply"
        input_artifacts = ["source", "${action.key}-tfplan"]
        category        = "Build"
        owner           = "AWS"
        provider        = "CodeBuild"
        version         = "1"
        run_order       = action.value * 2 + 2
        configuration = {
          ProjectName   = var.codebuild_apply_project
          PrimarySource = "source"
        }
      }
    }
  }
}

Expected Behavior

Terraform reports no changes.

Actual Behavior

Terraform reports changes.

erikpaasonen commented 4 years ago

This bug causes Terraform to unnecessarily discard all in-progress executions in an unrecoverable way, because the CodePipeline service supersedes any in-progress executions whenever the pipeline configuration is modified (even when effectively nothing changed). The pipeline must then start an entirely new execution to, for example, retry a failed action. For teams trying to manage multiple versions deployed to multiple deployment rings (multiple pipeline executions running concurrently, each at a different stage), this is a big productivity sink.

mousedownmike commented 4 years ago

I'm running into this problem without using dynamic actions. It seems to be related to the value of the Action's run_order and how it's mapped to the current state. Here is what the UI looks like inside my Deploy Stage:

(screenshot of the Deploy stage actions in the CodePipeline console)

My plans consistently perform the following:

name = AdminApiChangeSet -> ApiChangeSetExecute
run_order = 1 -> 2

name = ApiChangeSetExecute -> AdminApiChangeSet
run_order = 2 -> 1

Here's a lightly redacted definition of my Deploy stage:

stage {
    name = "Deploy"
    action {
      run_order        = 1
      category         = "Deploy"
      name             = "ApiChangeSet"
      owner            = "AWS"
      provider         = "CloudFormation"
      version          = "1"
      input_artifacts  = [
        "BuildArtifact"]
      output_artifacts = [
        "ApiChangeSet"]
      configuration    = {
        ActionMode         = "CHANGE_SET_REPLACE"
        RoleArn            = aws_iam_role.cloudformation.arn
        StackName          = aws_cloudformation_stack.api.name
        ChangeSetName      = "ApiChangeSet"
        TemplatePath       = "BuildArtifact::api-packaged.yaml"
        Capabilities       = "CAPABILITY_IAM"
      }
    }
    action {
      run_order       = 2
      category        = "Deploy"
      name            = "ApiChangeSetExecute"
      owner           = "AWS"
      provider        = "CloudFormation"
      version         = "1"
      input_artifacts = [
        "ApiChangeSet"]
      configuration   = {
        ActionMode    = "CHANGE_SET_EXECUTE"
        RoleArn       = aws_iam_role.cloudformation.arn
        StackName     = aws_cloudformation_stack.api.name
        ChangeSetName = "ApiChangeSet"
        TemplatePath  = "BuildArtifact::api-packaged.yaml"
      }
    }

    action {
      run_order        = 1
      category         = "Deploy"
      name             = "AdminApiChangeSet"
      owner            = "AWS"
      provider         = "CloudFormation"
      version          = "1"
      input_artifacts  = [
        "BuildArtifact"]
      output_artifacts = [
        "AdminApiChangeSet"]
      configuration    = {
        ActionMode         = "CHANGE_SET_REPLACE"
        RoleArn            = aws_iam_role.cloudformation.arn
        StackName          = aws_cloudformation_stack.admin_api.name
        ChangeSetName      = "AdminApiChangeSet"
        TemplatePath       = "BuildArtifact::admin-api-packaged.yaml"
        Capabilities       = "CAPABILITY_IAM"
      }
    }
    action {
      run_order       = 2
      category        = "Deploy"
      name            = "AdminApiChangeSetExecute"
      owner           = "AWS"
      provider        = "CloudFormation"
      version         = "1"
      input_artifacts = [
        "BuildArtifact"]
      configuration   = {
        ActionMode    = "CHANGE_SET_EXECUTE"
        RoleArn       = aws_iam_role.cloudformation.arn
        StackName     = aws_cloudformation_stack.admin_api.name
        ChangeSetName = "AdminApiChangeSet"
        TemplatePath  = "BuildArtifact::admin-api-packaged.yaml"
      }
    }
  }

mousedownmike commented 4 years ago

After several rearrangements of my configuration, I've been able to prevent changes by ordering my actions as follows:

My actions listed in my previous comment are now in this order:

  stage {
    name = "Deploy"

    action {
      run_order        = 1
      category         = "Deploy"
      name             = "AdminApiChangeSet"
      ...
    }
    action {
      run_order        = 1
      category         = "Deploy"
      name             = "ApiChangeSet"
      ...
    }
    action {
      run_order       = 2
      category        = "Deploy"
      name            = "AdminApiChangeSetExecute"
      ...
    }
    action {
      run_order       = 2
      category        = "Deploy"
      name            = "ApiChangeSetExecute"
      ...
    }
  }

I don't know if there's any way to control the order with dynamic actions, but this would seem to indicate that the state is not mapped in a way that can be reliably compared. Bug?
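
One way to control the order with dynamic actions (a sketch only, with hypothetical names; not something tested against the provider) is to flatten the action map into a list sorted the way the API appears to store it (by run_order, then by name) and iterate over that list:

locals {
  # Hypothetical map: action name => run_order
  deploy_actions = {
    "module-a-plan"  = 1
    "module-a-apply" = 2
    "module-b-plan"  = 1
  }

  # Build sortable "run_order|name" keys (zero-padded so the string sort matches the
  # numeric sort), sort them, then turn them back into objects. The result is ordered
  # by run_order first, then by name.
  deploy_actions_ordered = [
    for s in sort([for name, order in local.deploy_actions : format("%03d|%s", order, name)]) : {
      name      = split("|", s)[1]
      run_order = tonumber(split("|", s)[0])
    }
  ]
}

The dynamic "action" block can then use for_each = local.deploy_actions_ordered together with action.value.name and action.value.run_order, so the declaration order matches what the API returns.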

mattshep commented 4 years ago

Unfortunately, I'm unable to control this using the idea above from @mousedownmike. I am using dynamic "action" blocks, though, so that is likely the difference:

Terraform v0.13.5
+ provider registry.terraform.io/hashicorp/aws v3.5.0

My stage definition looks like so:

  dynamic "stage" {
    for_each = length(var.codebuild_projects) > 0 ? [1] : []

    content {
      name = "Build"

      dynamic "action" {
        for_each = length(var.codebuild_projects) > 0 ? var.codebuild_projects : []

        content {
          name             = lookup(action.key, "action_name")
          category         = "Build"
          owner            = "AWS"
          provider         = "CodeBuild"
          version          = "1"
          run_order        = lookup(action.key, "run_order")
          namespace        = lookup(action.key, "namespace", null)
          input_artifacts  = ["source_output"]
          output_artifacts = contains(keys(action.key), "output_artifact") ? [action.key.output_artifact] : []

          configuration = {
            ProjectName = lookup(action.key, "codebuild_project")

            EnvironmentVariables = jsonencode([
              {
                name  = "REPOSITORY_URI"
                value = var.ecr_repository
                type  = "PLAINTEXT"
              },
              {
                name  = "IMAGE_TAG"
                value = each.key
                type  = "PLAINTEXT"
              }
            ])
          }
        }
      }
    }
  }

I have tried ordering the elements of my codebuild_projects set by run_order and then name in several different fashions, to no effect (it still wants to change the actions on each plan). This is an example of the codebuild_projects definition:

  codebuild_projects = [
    {
      run_order         = "10"
      codebuild_project = aws_codebuild_project.docker_build_and_push.name
      action_name       = "build"
      namespace         = "build"
    },
    {
      run_order         = "1"
      codebuild_project = aws_codebuild_project.create_cloudformation_artifact.name
      action_name       = "cloudformation_artifact"
      output_artifact   = "cloudformation_artifact"
    },
    {
      run_order         = "1"
      codebuild_project = aws_codebuild_project.manage_semantic_version.name
      action_name       = "version"
      namespace         = "version"
    }
  ]

grahamhar commented 3 years ago

I've just taken a look at adding a diff suppress function for this, but I've hit a limitation in the Terraform plugin SDK: diff suppress functions currently only work on string attributes, so I can't see how this can be fixed that way. If one of the maintainers or anyone else can point me at another approach, I'll happily take on the work to fix this.

Some background on why I was looking at a diff suppress: although it isn't documented, the AWS API seems to default run_order to 1, and it sorts the actions within a given stage first by run order, then by name. I intended to sort the "new" config the same way, compare it to the "old", and suppress the diff if they match.

johannes-mathes commented 3 years ago

This bug is really troublesome. In my case it's impossible to work around: I have a complex stage in which multiple services are deployed, each with multiple steps, and in Terraform there is a for_each per stage.

Terraform probably expands the for_each in declaration order, while AWS CodePipeline sorts the actions according to their run order and returns them in that order, so the two never match.
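
To illustrate with hypothetical service names, assuming each service has a plan action (run_order 1) and an apply action (run_order 2):

locals {
  # Order in which Terraform generates the actions (per-service plan/apply pairs):
  declared_order = ["service-a-plan", "service-a-apply", "service-b-plan", "service-b-apply"]

  # Order the AWS API returns them in (sorted by run_order, then name), which is what
  # lands in the state and is compared against the config on the next plan:
  api_order = ["service-a-plan", "service-b-plan", "service-a-apply", "service-b-apply"]
}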

Mornor commented 2 years ago

Do we have any update on this? I'm facing the same issue with the following code:

resource "aws_codepipeline" "this" {
  name     = "${lookup(var.tags, "Environment", "")}-terraform-pipeline"
  role_arn = aws_iam_role.this.arn

  artifact_store {
    location = data.aws_s3_bucket.codepipeline_bucket.bucket
    type     = "S3"
  }

  dynamic "stage" {
    for_each = local.stages
    content {
      name = stage.value.name
      dynamic "action" {
        for_each = stage.value.action
        content {
          name             = action.value.name
          category         = action.value.category
          owner            = action.value.owner
          provider         = action.value.provider
          version          = action.value.version
          run_order        = action.value.run_order
          input_artifacts  = action.value.input_artifacts
          output_artifacts = action.value.output_artifacts
          configuration    = action.value.configuration
        }
      }
    }
  }
}

locals {
  stages = [{
    name = "Source"
    action = [{
      run_order        = 1
      category         = "Source"
      name             = "Source"
      owner            = "AWS"
      provider         = "CodeCommit"
      version          = "1"
      input_artifacts  = []
      output_artifacts = ["SourceArtifacts"]
      configuration = {
        BranchName           = "master"
        OutputArtifactFormat = "CODEBUILD_CLONE_REF"
        RepositoryName       = local.repo_name
        ProjectName          = null
      }
    }]
  }, {
    name = "dev"
    action = [{
      run_order        = 2
      category         = "Build"
      name             = "InitAndPlan"
      owner            = "AWS"
      provider         = "CodeBuild"
      version          = "1"
      input_artifacts  = ["SourceArtifacts"]
      output_artifacts = ["PlanArtifacts"]
      configuration = {
        BranchName           = null
        OutputArtifactFormat = null
        RepositoryName       = null
        ProjectName          = module.codebuild_tf_init_plan.name
      }
    }, {
      run_order        = 3
      category         = "Approval"
      name             = "Approve"
      owner            = "AWS"
      provider         = "Manual"
      version          = "1"
      input_artifacts  = []
      output_artifacts = []
      configuration = {
        BranchName           = null
        OutputArtifactFormat = null
        RepositoryName       = null
        ProjectName          = null
      }
    }]
  }]
}

Versions used

Terraform v1.1.7
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v4.6.0

Mornor commented 2 years ago

Well, after reaching out on Stack Overflow, it turns out I had to use a list instead of a set for my actions.

As per the docs, a set is

a collection of unique values that do not have any secondary identifiers or ordering

Which I guess somehow conflicted with trying to set the order of the actions?
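
For reference, a minimal sketch of that difference, using a hypothetical variable rather than the configuration above: a set(object) has no ordering, so Terraform may compare the actions against the state in a different order than AWS returns them, while a list(object) preserves the declaration order.

variable "deploy_actions" {
  # list(object) keeps the declared order; the same definition typed as
  # set(object({...})) would lose it.
  type = list(object({
    name      = string
    run_order = number
  }))
  default = [
    { name = "InitAndPlan", run_order = 1 },
    { name = "Approve", run_order = 2 },
  ]
}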

fireman777 commented 2 years ago

Hi all, I'm facing the same issue. Has anyone fixed or mitigated it? Is it possible to convert the action block to a list? Maybe someone has an example? Many thanks in advance.

pauliuspetka commented 2 years ago

Hi, this works for me.

The module looks like this:

resource "aws_codepipeline" "codepipeline" {
  for_each = var.code_pipeline
  name     = upper("${local.name_prefix}-${each.key}")
  role_arn = each.value["code_pipeline_role_arn"]

  artifact_store {
    type     = lookup(each.value, "artifact_store", null) == null ? "" : lookup(each.value.artifact_store, "type", "S3")
    location = lookup(each.value, "artifact_store", null) == null ? null : lookup(each.value.artifact_store, "artifact_bucket", null)
  }

  dynamic "stage" {
    for_each = lookup(each.value, "stages", {})
    iterator = stage
    content {
      name = lookup(stage.value, "name")
      dynamic "action" {
        for_each = lookup(stage.value, "actions", {}) //[stage.key]
        iterator = action
        content {
          name             = action.value["name"]
          category         = action.value["category"]
          owner            = action.value["owner"]
          provider         = action.value["provider"]
          version          = action.value["version"]
          run_order        = action.value["run_order"]
          input_artifacts  = lookup(action.value, "input_artifacts", null)
          output_artifacts = lookup(action.value, "output_artifacts", null)
          configuration    = action.value["configuration"]
          namespace        = lookup(action.value, "namespace", null)
        }
      }
    }
  }
}

and the module call looks like this:

module "code_pipeline" {
  count         = lower(var.environment) == "prod" ? 0 : 1
  source        = "../module_aws_codepipeline"
  project_name  = upper("PIPELINE")
  environment   = var.environment
  code_pipeline = local.code_pipeline
}

data "aws_codestarconnections_connection" "connection" {
  name = "repo"
}

locals {
  github_variables = [
    {
      name  = "BranchName"
      value = "#{SourceVariables.BranchName}"
      type  = "PLAINTEXT"
    },
    {
      name  = "CommitId"
      value = "#{SourceVariables.CommitId}"
      type  = "PLAINTEXT"
    },
    {
      name  = "CommitMessage"
      value = "#{SourceVariables.CommitMessage}"
      type  = "PLAINTEXT"
    },
    {
      name  = "AuthorDate"
      value = "#{SourceVariables.AuthorDate}"
      type  = "PLAINTEXT"
    },
  ]

  code_pipeline = {
    campaigns-aarp = {
      code_pipeline_role_arn = module.roles.role_arn["CODE-PIPELINE"]
      artifact_store = {
        type            = "S3"
        artifact_bucket = module.s3.bucket_name["build-artifacts"]
      }
      stages = {
        stage_1 = {
          name = "Source"
          actions = {
            action_1 = {
              run_order        = 1
              category         = "Source"
              name             = "AppSource"
              owner            = "AWS"
              provider         = "CodeStarSourceConnection"
              version          = "1"
              output_artifacts = ["AppArtifacts"]
              namespace        = "SourceVariables"
              configuration = {
                BranchName       = "nonprod"
                FullRepositoryId = "Repo/path"
                ConnectionArn    = data.aws_codestarconnections_connection.connection.arn
              }
            },
            action_2 = {
              run_order        = 1
              category         = "Source"
              name             = "InfrastructureSource"
              owner            = "AWS"
              provider         = "CodeStarSourceConnection"
              version          = "1"
              output_artifacts = ["InfraArtifacts"]
              namespace        = "InfraVariables"
              configuration = {
                BranchName       = "master"
                FullRepositoryId = "Repo/path"
                ConnectionArn    = data.aws_codestarconnections_connection.connection.arn
                DetectChanges    = false
              }
            }
          }
        }

        stage_2 = {
          name = "Build"
          actions = {
            action_1 = {
              run_order        = 2
              category         = "Build"
              name             = "Build"
              owner            = "AWS"
              provider         = "CodeBuild"
              version          = "1"
              input_artifacts  = ["AppArtifacts"]
              output_artifacts = ["BuildArtifacts"]
              namespace        = "BuildVariables"
              configuration = {
                ProjectName          = module.code_build.code_build_name["app"]
                EnvironmentVariables = jsonencode(local.github_variables)
              }
            }
          }
        }
        stage_3 = {
          name = "Deploy"
          actions = {
            action_1 = {
              run_order        = 3
              category         = "Build"
              name             = "Deploy"
              owner            = "AWS"
              provider         = "CodeBuild"
              version          = "1"
              input_artifacts  = ["InfraArtifacts"]
              output_artifacts = ["DeployArtifacts"]
              configuration = {
                ProjectName = module.code_build.code_build_name["infra"]
                EnvironmentVariables = jsonencode(concat(local.github_variables, [{
                  name  = "DOCKER_TAG_NAME"
                  value = "#{BuildVariables.DOCKER_TAG_NAME}"
                  type  = "PLAINTEXT"
                }, ]))
              }
            }
          }
        }
      }
    }
  }
}

fireman777 commented 2 years ago

Thanks pauliuspetka.

fmarrero commented 2 years ago

I have the same issue.

I use a CodePipeline to run a bunch of different TF modules in a specific order.

I dynamically generate the list of codepipeline_actions in my locals.

My terraform:

locals.tf

locals {
  tf_apply_path          = "terraform/account-demos/"
  codebuild_scripts_path = "scripts/codebuild-terraform"

  codebuild_tf_apply_dir_paths = {
    "remote-backend"       = local.tf_apply_path
    "s3"                   = local.tf_apply_path
    "dns/route53"          = local.tf_apply_path
    "networking/us-east-1" = local.tf_apply_path
    "app-pipeline"         = local.tf_apply_path
    "certificate-manager"  = local.tf_apply_path
  }

  codepipeline_action_run_orders = {
    "remote-backend"       = 1
    "s3"                   = 1
    "dns/route53"          = 1
    "networking/us-east-1" = 1
    "app-pipeline"         = 1
    "certificate-manager"  = 2
  }

  codepipeline_actions = { for dir, path in local.codebuild_tf_apply_dir_paths : dir =>
    {
      name            = dir
      input_artifacts = ["app"]
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      version         = "1"
      configuration = {
        ProjectName = replace(module.tf_apply_jobs_codebuild[dir].name, "/", "-")
      }
      run_order = lookup(local.codepipeline_action_run_orders, dir, null)
    }
  }
}

codepipeline.tf

resource "aws_codepipeline" "codepipeline" {
  name     = "build-demo-resources"
  role_arn = aws_iam_role.codepipeline_role.arn

  artifact_store {
    location = local.pipeline_bucket_name
    type     = "S3"
  }

  stage {
    name = "terraform-stuff"
    action {
      category = "Source"
      owner    = "AWS"
      provider = "CodeStarSourceConnection"
      version  = "1"
      name     = local.source_repo
      configuration = {
        ConnectionArn    = "arn:aws:codestar-connections:us-east-1:123123123:connection/abc42534-f88d-46a3-b2ae-50a6a1185897"
        FullRepositoryId = "${local.source_owner}/${local.source_repo}"
        BranchName       = local.source_branch
      }
      output_artifacts = ["app"]
    }
  }

  stage {
    name = "deploy"
    dynamic "action" {
      for_each = local.codepipeline_actions
      content {
        input_artifacts  = ["app"]
        name             = replace(action.value.name, "/", "-")
        category         = action.value.category
        owner            = action.value.owner
        provider         = action.value.provider
        run_order        = action.value.run_order
        configuration    = action.value.configuration
        version          = "1"
        output_artifacts = []
      }
    }
  }

  lifecycle {
    ignore_changes = [stage[1]]
  }
}

I use ignore_changes to prevent the false-positive changes, but this is not ideal (since there aren't really any changes).

juanpgomez-gsa commented 1 year ago

I am working on something similar. Would you be willing to share your progress and how you defined your stage? I keep hitting this error message when I work with the environment variables:

creating CodePipeline (dev-mgt-mytest-cp): ValidationException: 2 validation errors detected: Value at 'pipeline.stages.1.member.actions.1.member.configuration' failed to satisfy constraint: Map value must satisfy constraint: [Member must have length less than or equal to 50000, Member must have length greater than or equal to 1]; Value at 'pipeline.stages.2.member.actions.1.member.configuration' failed to satisfy constraint: Map value must satisfy constraint: [Member must have length less than or equal to 50000, Member must have a length greater than or equal to 1]

fireman777 commented 1 year ago

Hi @juanpgomez-gsa. I didn't face exactly your problem, but regarding the issue described above, I fixed it by rewriting the code. The main idea is to declare the stage (or stages) with priority 1 first, then the stage (or stages) with priority 2, and so on. Then Terraform won't try to reorder these stages on every terraform apply.

jpgomez-cwn commented 1 year ago

@fireman777 could you share what type of variable declaration you used for the code_pipeline?

jpgomez-cwn commented 1 year ago

@fireman777 could you share what type of variable declaration you used for the code_pipeline?

Never mind, I figured out that you use a map:

variable "code_pipeline" {
  type        = map(any)
  description = "The pipeline variables that build out the stages"

}

Great post! I am going to create a sample GitHub repo for this. I have been looking for an answer to this for quite some time!

fireman777 commented 1 year ago

Hi @jpgomez-cwn, I've tried to change the type of the variable that contains the stages but didn't have any luck. As far as I can tell, the type doesn't fix the problem, but maybe that is just my case (for example, an old version of Terraform or of the provider). Anyway, I've placed the stages in the order that corresponds to their run_order, and that helped me. If you find a solution, please post it here; it could help a lot of people.

juanpgomez-gsa commented 1 year ago

Hi @jpgomez-cwn, I've tried to change the type of the variable that contains the stages but didn't have any luck. As far as I can tell, the type doesn't fix the problem, but maybe that is just my case (for example, an old version of Terraform or of the provider). Anyway, I've placed the stages in the order that corresponds to their run_order, and that helped me. If you find a solution, please post it here; it could help a lot of people.

Hey @fireman777, thanks for the feedback. I will say the type of variable significantly changes how this works, specifically when people are using the EnvironmentVariables configuration. The approach @pauliuspetka provided is the only variable type I have seen work to consume the stages, actions, and configurations with environment variables. A map seems to be the best fit for this variable type without hitting the Terraform limits that other variable types produce.

baalimago commented 9 months ago

Workaround for the problem: transform your stage data model into something that maintains the order. Meaning: if you have a set or an object, transform it into a list and then order the list by the run_order you need.

Example (pseudo-ish code): you have a map:

locals {
  build_stage = {
    "a" = { run_order = 1 } # ... plus the rest of the action attributes
    "b" = { run_order = 2 } # ...
    "c" = { run_order = 1 } # ...
  }
}

Separate this into one list per run order:

locals {
  run_order_1 = [
    for k in keys(local.build_stage) : { key = k, value = local.build_stage[k] }
    if local.build_stage[k].run_order == 1
  ]
  run_order_2 = [
    for k in keys(local.build_stage) : { key = k, value = local.build_stage[k] }
    if local.build_stage[k].run_order == 2
  ]
}

Add the lists together in order:

locals {
  build_stage_ordered = concat(local.run_order_1, local.run_order_2)
}

Then you can re-use the build_stage_ordered as if it were a map by changing each.key -> each.value.key and each.value -> each.value.value, and keep the rest of the logic.