ljfranklin / terraform-resource

A Concourse resource to create infrastructure via Terraform
MIT License

Terraform Concourse Resource

A Concourse resource that allows jobs to modify IaaS resources via Terraform. Useful for creating a pool of reproducible environments. No more snowflakes!

See DEVELOPMENT if you're interested in submitting a PR :+1:


Source Configuration

Important!: The source.storage field has been replaced by source.backend_type and source.backend_config to leverage the built-in Terraform backends. If you currently use source.storage in your pipeline, follow the instructions in the Backend Migration section to ensure your state files are not lost.

Source Example

resource_types:
- name: terraform
  type: docker-image
  source:
    repository: ljfranklin/terraform-resource
    tag: latest

resources:
  - name: terraform
    type: terraform
    source:
      env_name: staging
      backend_type: s3
      backend_config:
        bucket: mybucket
        key: mydir/terraform.tfstate
        region: us-east-1
        access_key: {{storage_access_key}}
        secret_key: {{storage_secret_key}}
      vars:
        tag_name: concourse
      env:
        AWS_ACCESS_KEY_ID: {{environment_access_key}}
        AWS_SECRET_ACCESS_KEY: {{environment_secret_key}}

The above example uses AWS S3 to store Terraform state files. All backend_config options are forwarded straight to Terraform; see the Terraform backend documentation for the full list of supported options.

Terraform also supports many other state file backends, for example Google Cloud Storage (GCS):

resources:
  - name: terraform
    type: terraform
    source:
      backend_type: gcs
      backend_config:
        bucket: mybucket
        prefix: mydir
        credentials: {{gcp_credentials_json}}
      ...

Image Variants

Note: all images support AMD64 and ARM64 architectures, although only AMD64 is fully tested prior to release.

See Dockerhub for a list of all available tags. If you'd like to build your own image from a specific Terraform branch, configure a pipeline with build-image-pipeline.yml.
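
For reproducible pipelines you may prefer pinning the image to a specific release rather than latest. A sketch (the tag shown is a placeholder; check Dockerhub for the tags that actually exist):

```yaml
resource_types:
- name: terraform
  type: docker-image
  source:
    repository: ljfranklin/terraform-resource
    # pin to a specific release instead of `latest`;
    # the tag below is a placeholder, not a guaranteed release
    tag: "1.1.0"
```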

Behavior

This resource should usually be used with the put action rather than a get. This ensures the output always reflects the current state of the IaaS and allows management of multiple environments as shown below. A get step outputs the same metadata file format shown below for put.
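
For completeness, a standalone get looks like the following sketch. It fetches the current state and writes the same name and metadata files that a put produces, but note the caveat above: put is usually preferred.

```yaml
jobs:
- name: read-current-state
  plan:
  # a standalone `get` writes the same `name` and `metadata`
  # output files as a `put`, without modifying any IaaS resources
  - get: terraform
```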

Get Parameters

Note: In Concourse, a put is always followed by an implicit get. To pass get params via put, use put.get_params.
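
As a sketch, passing a get param through a put might look like this (output_statefile is shown as an example get param; verify it against the parameter tables before relying on it):

```yaml
jobs:
- name: update-and-fetch-state
  plan:
  - get: project-git-repo
  - put: terraform
    params:
      env_name: staging
      terraform_source: project-git-repo/terraform
    # params for the implicit `get` that follows this `put`
    get_params:
      output_statefile: true
```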

Put Parameters

Put Example

Every put action produces name and metadata files as output, containing the env_name and the Terraform outputs in JSON format.

jobs:
- name: update-infrastructure
  plan:
  - get: project-git-repo
  - put: terraform
    params:
      env_name: e2e
      terraform_source: project-git-repo/terraform
  - task: show-outputs
    config:
      platform: linux
      inputs:
        - name: terraform
      run:
        path: /bin/sh
        args:
          - -c
          - |
              echo "name: $(cat terraform/name)"
              echo "metadata: $(cat terraform/metadata)"

The preceding job would print output similar to the following:

name: e2e
metadata: { "vpc_id": "vpc-123456", "vpc_tag_name": "concourse" }
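
Downstream tasks typically parse the metadata file as JSON. A minimal sketch using Python's standard library, with the sample metadata above inlined as a string:

```python
import json

# contents of the `terraform/metadata` output file from the example above
metadata_json = '{ "vpc_id": "vpc-123456", "vpc_tag_name": "concourse" }'

# each key is a Terraform output from the applied template
outputs = json.loads(metadata_json)
print(outputs["vpc_id"])        # vpc-123456
print(outputs["vpc_tag_name"])  # concourse
```

In a real task, you would read the file with `open("terraform/metadata")` instead of inlining the string.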

Plan and Apply Example

jobs:
- name: terraform-plan
  plan:
  - get: project-git-repo
  - put: terraform
    params:
      env_name: staging
      terraform_source: project-git-repo/terraform
      plan_only: true
      vars:
        subnet_cidr: 10.0.1.0/24

- name: terraform-apply
  plan:
  - get: project-git-repo
    trigger: false
    passed: [terraform-plan]
  - get: terraform
    trigger: false
    passed: [terraform-plan]
  - put: terraform
    params:
      env_name: staging
      terraform_source: project-git-repo/terraform
      plan_run: true

Managing a single environment vs a pool of environments

This resource can be used to manage a single environment or a pool of many environments.

Single Environment

To use this resource to manage a single environment, set source.env_name or put.params.env_name to a fixed name like staging or production as shown in the previous put example. Each put will update the IaaS resources and state file for that environment.

Pool of Environments

To manage a pool of many environments, you can use this resource in combination with the pool-resource. This allows you to create a pool of identical environments that can be claimed and released by CI jobs and humans. Setting put.params.generate_random_name: true will create a random, unique env_name like "coffee-bee" for each environment, and the pool-resource will persist the name and metadata for these environments in a private git repo.

jobs:
- name: create-env-and-lock
  plan:
    # apply the terraform template with a random env_name
    - get: project-git-repo
    - put: terraform
      params:
        terraform_source: project-git-repo/terraform
        generate_random_name: true
        delete_on_failure: true
        vars:
          subnet_cidr: 10.0.1.0/24
    # create a new pool-resource lock containing the terraform output
    - put: locks
      params:
        add: terraform/

- name: claim-env-and-test
  plan:
    # claim a random env lock
    - put: locks
      params:
        acquire: true
    # the locks dir will contain `name` and `metadata` files described above
    - task: run-tests-against-env
      file: test.yml
      input_mapping:
        env: locks/

- name: destroy-env-and-lock
  plan:
    - get: project-git-repo
    # acquire a lock
    - put: locks
      params:
        acquire: true
    # destroy the IaaS resources
    - put: terraform
      params:
        terraform_source: project-git-repo/terraform
        env_name_file: locks/name
        action: destroy
      get_params:
        action: destroy
    # destroy the lock
    - put: locks
      params:
        remove: locks/

Backend Migration

Previous versions of this resource required statefiles to be stored in an S3-compatible blobstore using the source.storage field. The latest version instead uses the built-in Terraform Backends, which support many other statefile storage options in addition to S3. If you have an existing pipeline that uses source.storage, your statefiles will need to be migrated into the new backend directory structure using the following steps:

  1. Rename source.storage to source.migrated_from_storage in your pipeline config. All fields within source.storage should remain unchanged, only the top-level key should be renamed.
  2. Add source.backend_type and source.backend_config fields as described under Source Configuration.
  3. Update your pipeline: fly set-pipeline.
  4. The next time your pipeline performs a put to the Terraform resource:
    • The resource will copy the statefile for the modified environment into the new directory structure.
    • The resource will rename the old statefile in S3 to $ENV_NAME.migrated.
  5. Once all statefiles have been migrated and everything is working as expected, you may:
    • Remove the old .migrated statefiles.
    • Remove source.migrated_from_storage from your pipeline config.

Breaking Change: The backend mode drops support for feeding Terraform outputs back in as input vars to subsequent puts. This "feature" caused surprising errors when an input and an output shared a name but had different types, and supporting it would have made the implementation significantly more complicated under the new migrated_from_storage flow.

Legacy storage configuration

Migration Example

resources:
  - name: terraform
    type: terraform
    source:
      backend_type: s3
      backend_config:
        bucket: mybucket
        key: mydir/terraform.tfstate
        region: us-east-1
        access_key: {{storage_access_key}}
        secret_key: {{storage_secret_key}}
      migrated_from_storage:
        bucket: mybucket
        bucket_path: mydir/
        region: us-east-1
        access_key_id: {{storage_access_key}}
        secret_access_key: {{storage_secret_key}}
      vars:
        tag_name: concourse
      env:
        AWS_ACCESS_KEY_ID: {{environment_access_key}}
        AWS_SECRET_ACCESS_KEY: {{environment_secret_key}}