Ansible provisioner for Terraform


Ansible with Terraform 0.13.x - remote and local provisioners.

General overview

The purpose of the provisioner is to provide an easy method for running Ansible to configure hosts created with Terraform.

This provisioner, however, is not designed to handle all possible Ansible use cases. Let's consider what is and is not possible with this provisioner.

After provisioning, you may find the following Ansible module useful if you use AWS S3 for state storage: terraform-state-ansible-module.

What's possible

What's not possible

The provisioner by no means attempts to implement all Ansible use cases. It is not intended to be used as a jump host; for example, the remote mode does not allow provisioning hosts other than the one where Ansible is executed. The range of use cases covered by Ansible is so wide that striving for full support would be a huge undertaking for one person. Using the provisioner with a null_resource provides further options for passing the Ansible inventory, including dynamic inventory, to meet use cases not addressed when used with a compute resource.

If you find yourself in need of executing Ansible against well specified, complex inventories, either follow the regular process of provisioning hosts via Terraform and executing Ansible against them as a separate step, or initiate the Ansible execution as the last Terraform task using null_resource and depends_on. Of course, pull requests are always welcome!

Installation

Using Docker

$ cd /my-terraform-project
$ docker run -it --rm -v $PWD:$PWD -w $PWD radekg/terraform-ansible:latest init
$ docker run -it --rm -v $PWD:$PWD -w $PWD radekg/terraform-ansible:latest apply

Local Installation

Note that although terraform-provisioner-ansible is listed in the Terraform Registry, it cannot be installed using a module terraform stanza; such a configuration will not cause Terraform to download the terraform-provisioner-ansible binary.

Prebuilt releases are available on GitHub. Download a release for the version you require and place it in ~/.terraform.d/plugins directory, as documented here.

Caution: you will need to rename the file to match the pattern recognized by Terraform: terraform-provisioner-ansible_v<version>.

Alternatively, you can download and deploy an existing release using the following script:

curl -sL \
  https://raw.githubusercontent.com/radekg/terraform-provisioner-ansible/master/bin/deploy-release.sh \
  --output /tmp/deploy-release.sh
chmod +x /tmp/deploy-release.sh
/tmp/deploy-release.sh -v <version number>
rm -rf /tmp/deploy-release.sh

Configuration

Example:

resource "aws_instance" "test_box" {
  # ...
  connection {
    host = "..."
    user = "centos"
  }
  provisioner "ansible" {
    plays {
      playbook {
        file_path = "/path/to/playbook/file.yml"
        roles_path = ["/path1", "/path2"]
        force_handlers = false
        skip_tags = ["list", "of", "tags", "to", "skip"]
        start_at_task = "task-name"
        tags = ["list", "of", "tags"]
      }
      # shared attributes
      enabled = true
      hosts = ["zookeeper"]
      groups = ["consensus"]
      become = false
      become_method = "sudo"
      become_user = "root"
      diff = false
      extra_vars = {
        extra = {
          variables = {
            to = "pass"
          }
        }
      }
      forks = 5
      inventory_file = "/optional/inventory/file/path"
      limit = "limit"
      vault_id = ["/vault/password/file/path"]
      verbose = false
    }
    plays {
      module {
        module = "module-name"
        args = {
          "arbitrary" = "arguments"
        }
        background = 0
        host_pattern = "string host pattern"
        one_line = false
        poll = 15
      }
      # shared attributes
      # enabled = ...
      # ...
    }
    plays {
      galaxy_install {
        force = false
        server = "https://optional.api.server"
        ignore_certs = false
        ignore_errors = false
        keep_scm_meta = false
        no_deps = false
        role_file = "/path/to/role/file"
        roles_path = "/optional/path/to/the/directory/containing/your/roles"
        verbose = false
      }
      # shared attributes other than:
      # enabled = ...
      # are NOT taken into consideration for galaxy_install
    }
    defaults {
      hosts = ["eu-central-1"]
      groups = ["platform"]
      become_method = "sudo"
      become_user = "root"
      extra_vars = {
        extra = {
          variables = {
            to = "pass"
          }
        }
      }
      forks = 5
      inventory_file = "/optional/inventory/file/path"
      limit = "limit"
      vault_id = ["/vault/password/file/path"]
    }
    ansible_ssh_settings {
      connect_timeout_seconds = 10
      connection_attempts = 10
      ssh_keyscan_timeout = 60
      insecure_no_strict_host_key_checking = false
      insecure_bastion_no_strict_host_key_checking = false
      user_known_hosts_file = ""
      bastion_user_known_hosts_file = ""
    }
    remote {
      use_sudo = true
      skip_install = false
      skip_cleanup = false
      install_version = ""
      local_installer_path = ""
      remote_installer_directory = "/tmp"
      bootstrap_directory = "/tmp"
    }
  }
}
resource "aws_instance" "test_box" {
  # ...
}

resource "null_resource" "test_box" {
  depends_on = [aws_instance.test_box]
  connection {
    host = "${aws_instance.test_box.0.public_ip}"
    private_key = "${file("./test_box")}"
  }
  provisioner "ansible" {
    plays {
      playbook {
        file_path = "/path/to/playbook/file.yml"
        roles_path = ["/path1", "/path2"]
        force_handlers = false
        skip_tags = ["list", "of", "tags", "to", "skip"]
        start_at_task = "task-name"
        tags = ["list", "of", "tags"]
      }
      hosts = ["aws_instance.test_box.*.public_ip"]
      groups = ["consensus"]
    }
  }
}

Plays

Selecting what to run

Each plays block must contain exactly one playbook, module, or galaxy_install. Define multiple plays blocks when more than one Ansible action should be executed against a host.
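
For example, a minimal sketch with two plays blocks, one running a playbook and one running a single module against the same host (the playbook path and the ping module are illustrative placeholders):

    provisioner "ansible" {
      plays {
        playbook {
          file_path = "/path/to/playbook/file.yml"
        }
      }
      plays {
        module {
          module = "ping"
        }
      }
    }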

Playbook attributes

Module attributes

Galaxy Install attributes

Plays attributes

Defaults

Some of the plays settings might be common across multiple plays. Such settings can be provided using the defaults attribute. Any setting from the following list can be specified in defaults:

None of the boolean attributes can be specified in defaults. Neither playbook nor module can be specified in defaults.
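
As a hedged sketch, assuming values set directly on a plays block take precedence over defaults (paths and group names are illustrative):

    provisioner "ansible" {
      defaults {
        groups        = ["platform"]
        become_method = "sudo"
        become_user   = "root"
      }
      plays {
        playbook {
          file_path = "/path/to/base.yml"
        }
        # inherits groups, become_method and become_user from defaults
      }
      plays {
        playbook {
          file_path = "/path/to/app.yml"
        }
        # set directly, instead of relying on defaults.groups
        groups = ["application"]
      }
    }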

Ansible SSH settings

The following settings apply to local provisioning only:

Remote

The presence of this block enables remote provisioning. To use the remote provisioner with its default settings, simply add remote {} to your provisioner.

Examples

Working examples.

Usage

The provisioner does not support passwords. It is possible to add password support for:

However, the local provisioner with a bastion currently relies on executing an Ansible command over SSH with -o ProxyCommand, which would require putting the password on the terminal. For consistency, no password support is provided.

Local provisioner: SSH details

The local provisioner requires resource.connection with, at least, the user defined. After the bootstrap, the plugin inspects the connection info, checks that the user and private_key are set, and verifies that provisioning indeed succeeded by checking the host (which should be an IP address of the newly created instance). If the connection info does not provide the SSH private key, ssh agent mode is assumed.

In the process of doing so, a temporary inventory will be created for the newly created host, the pem file will be written to a temp file, and a temporary known_hosts file will be created. The temporary known_hosts file and the temporary pem are per provisioner run; an inventory is created for each plays block. Files are cleaned up after the provisioner finishes or fails. The inventory will be removed only if it was not supplied with inventory_file.
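
A minimal connection sketch for the local provisioner (resource name and user are illustrative); because no private_key is given, ssh agent mode is assumed:

resource "aws_instance" "example" {
  # ...
  connection {
    user = "centos"
    host = "${self.public_ip}"
    # no private_key: the provisioner falls back to ssh agent mode
  }
  provisioner "ansible" {
    plays {
      playbook {
        file_path = "/path/to/playbook/file.yml"
      }
    }
  }
}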

Local provisioner: host and bastion host keys

Because the provisioner executes SSH commands outside of itself, via Ansible command line tools, the provisioner must construct a temporary SSH known_hosts file to feed to Ansible. There are two possible scenarios.

Host without a bastion

  1. If connection.host_key is used, the provisioner will use the provided host key to construct the temporary known_hosts file.
  2. If connection.host_key is not given or empty, the provisioner will attempt a connection to the host and retrieve the first host key returned during the handshake (similar to ssh-keyscan but using Golang SSH).

Host with bastion

This is a little bit more involved than the previous case.

  1. If connection.bastion_host_key is provided, the provisioner will use the provided bastion host key for the known_hosts file.
  2. If connection.bastion_host_key is not given or empty, the provisioner will attempt a connection to the bastion host and retrieve the first host key returned during the handshake (similar to ssh-keyscan but using Golang SSH).

However, Ansible must know the host key of the target host where the bootstrap actually happens. If connection.host_key is provided, the provisioner will simply use the provided value. But, if no connection.host_key is given (or empty), the provisioner will open an SSH connection to the bastion host and perform an ssh-keyscan operation against the target host on the bastion host.
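
A hedged sketch of a connection block, placed inside the resource being provisioned, for the bastion scenario (host name and key file paths are illustrative); when host_key or bastion_host_key is omitted, the scan behaviour described above applies:

connection {
  user             = "centos"
  host             = "${self.private_ip}"
  bastion_host     = "bastion.example.com"
  bastion_user     = "centos"
  # optional: providing the keys explicitly avoids the scans described above
  host_key         = "${file("./keys/target_host_key.pub")}"
  bastion_host_key = "${file("./keys/bastion_host_key.pub")}"
}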

In the ssh-keyscan case, the bastion host must:

Compute resource local provisioner: hosts and groups

The plays.hosts and defaults.hosts attributes can be used with the local provisioner. When used with a compute resource, only the first defined host will be used when generating the inventory file; additional hosts will be ignored. If plays.hosts or defaults.hosts is not specified, the provisioner uses the public IP address of the Terraform provisioned resource instance. The inventory file is generated in the following format with a single host:

aFirstHost ansible_host=<ip address of the host> ansible_connection=ssh

For each group, an additional INI section will be added, where each section is:

[groupName]
aFirstHost ansible_host=<ip address of the host> ansible_connection=ssh

For a host list ["someHost"] and a group list of ["group1", "group2"], the inventory would be:

someHost ansible_host=<ip> ansible_connection=ssh

[group1]
someHost ansible_host=<ip> ansible_connection=ssh

[group2]
someHost ansible_host=<ip> ansible_connection=ssh

If hosts is an empty list or not given, the resulting generated inventory is:

<ip> ansible_connection=ssh

[group1]
<ip> ansible_connection=ssh

[group2]
<ip> ansible_connection=ssh

Null_resource local provisioner: hosts and groups

The plays.hosts and defaults.hosts can be used with the local provisioner on a null_resource. All passed hosts are used when generating the inventory file. The inventory file is generated in the following format:

<firstHost IP> 
<secondHost IP>

For each group, an additional INI section will be added, where each section is:

[groupName]
<firstHost IP> 
<secondHost IP>

For a host list ["firstHost IP", "secondHost IP"] and a group list of ["group1", "group2"], the inventory would be:

<firstHost IP> 
<secondHost IP>

[group1]
<firstHost IP> 
<secondHost IP>

[group2]
<firstHost IP> 
<secondHost IP>

Remote provisioner: running on hosts created by Terraform

The remote provisioner can be enabled by adding a remote {} block to the provisioner:

resource "aws_instance" "ansible_test" {
  # ...
  connection {
    user = "centos"
    private_key = "${file("${path.module}/keys/centos.pem")}"
  }
  provisioner "ansible" {
    plays {
      # ...
    }

    # enable remote provisioner
    remote {}

  }
}

Unless remote.skip_install = true, the provisioner will install Ansible on the bootstrapped machine. Next, a temporary inventory file is created and uploaded to the host, along with any playbooks, roles, and Vault password files.

Remote provisioning works with a Linux target host only.
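
As a sketch, the installation behaviour can be tuned with the remote attributes shown in the configuration example above; the version value below is illustrative:

remote {
  skip_install    = false
  install_version = "2.9.0"  # illustrative: pin the Ansible version installed on the target
}

With skip_install = true, no installation is attempted, so Ansible must already be present on the target host.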

Supported Ansible repository layouts

This provisioner supports two main repository layouts.

  1. Roles nested under the playbook directory:

    .
    ├── install-tree.yml
    └── roles
        └── tree
            └── tasks
                └── main.yml
  2. Roles and playbooks directories separate:

    .
    ├── playbooks
    │   └── install-tree.yml
    └── roles
        └── tree
            └── tasks
                └── main.yml

In the second case, to reference the roles, it is necessary to use the plays.playbook.roles_path attribute:

    plays {
      playbook {
        file_path = ".../playbooks/install-tree.yml"
        roles_path = [
            ".../ansible-data/roles"
        ]
      }
    }

In the first case, it is sufficient to use only the plays.playbook.file_path; the roles directory sits next to the playbook and is therefore available to Ansible:

    plays {
      playbook {
        file_path = ".../playbooks/install-tree.yml"
      }
    }

Remote provisioning directory upload

A remark regarding remote provisioning: the remote provisioner must upload referenced playbooks and role paths to the remote server. In the case of a playbook, the complete parent directory of the YAML file will be uploaded. The remote provisioner attempts to deduplicate uploads: if multiple plays reference the same playbook, the playbook will be uploaded only once. This is achieved by generating an MD5 hash of the absolute path to the playbook's parent directory and storing your playbooks at ${remote.bootstrap_directory}/${md5-hash} on the remote server.

For the roles path, the complete directory referenced in roles_path will be uploaded to the remote server. The same deduplication method applies, but the MD5 hash is computed from the roles_path itself.

Tests

Integration tests require ansible and ansible-playbook on the $PATH. To run tests:

make test-verbose

Creating releases

To cut a release, run:

curl -sL https://raw.githubusercontent.com/radekg/git-release/master/git-release --output /tmp/git-release
chmod +x /tmp/git-release
/tmp/git-release --repository-path=$GOPATH/src/github.com/radekg/terraform-provisioner-ansible
rm -rf /tmp/git-release

After the release is cut, build the binaries for the release:

git checkout v${RELEASE_VERSION}
./bin/build-release-binaries.sh

Handle Docker image:

git checkout v${RELEASE_VERSION}
docker build --build-arg TAP_VERSION=$(cat .version) -t radekg/terraform-ansible:$(cat .version) .
docker login --username=radekg
docker tag radekg/terraform-ansible:$(cat .version) radekg/terraform-ansible:latest
docker push radekg/terraform-ansible:$(cat .version)
docker push radekg/terraform-ansible:latest

Note that the version is hardcoded in the Dockerfile. You may wish to update it after release.