Ansible with Terraform 0.13.x - remote and local provisioners.
The purpose of the provisioner is to provide an easy method for running Ansible to configure hosts created with Terraform.
This provisioner, however, is not designed to handle all possible Ansible use cases. Let's consider what's possible and what's not possible with this provisioner.
After provisioning, you may find the following Ansible module useful if you use AWS S3 for state storage: terraform-state-ansible-module.
Compute resource local provisioner: Ansible runs on the same machine where Terraform is executed and provisions the hosts created with a Terraform compute resource. When count is used with the compute resource and is greater than 1, the provisioner runs after each resource instance is created, passing the host information for that instance only. Ansible is executed with ansible_connection=ssh; the host alias is resolved from the resource.connection attribute, and it is possible to specify an ansible_host using plays.hosts.
Compute resource remote provisioner: Ansible runs directly on the host created with the Terraform resource and provisions that resource with ansible_connection=local. When used with hosts, each host will be included in every group provided with groups, but each of them will use ansible_connection=local.
null_resource local provisioner: Ansible runs on the same machine where Terraform is executed and provisions the hosts listed in the plays.hosts attribute. The host group is defined by plays.groups. plays.hosts can contain interpolated values, so execution is triggered by the availability of the interpolated vars.
The provisioner by no means attempts to implement all Ansible use cases, and it is not intended to be used as a jump host. For example, the remote mode does not allow provisioning hosts other than the one where Ansible is executed. The number of use cases and possibilities covered by Ansible is so wide that striving for full support would be a huge undertaking for one person. Using the provisioner with a null_resource provides further options for passing the Ansible inventory, including dynamic inventory, to meet use cases not addressed when used with a compute resource.
If you find yourself in need of executing Ansible against well-specified, complex inventories, either follow the regular process of provisioning hosts via Terraform and executing Ansible against them as a separate step, or initiate the Ansible execution as the last Terraform task using null_resource and depends_on. Of course, pull requests are always welcome!
$ cd /my-terraform-project
$ docker run -it --rm -v $PWD:$PWD -w $PWD radekg/terraform-ansible:latest init
$ docker run -it --rm -v $PWD:$PWD -w $PWD radekg/terraform-ansible:latest apply
Note that although terraform-provisioner-ansible is in the Terraform Registry, it cannot be installed using a module stanza in your Terraform configuration, as such a configuration will not cause Terraform to download the terraform-provisioner-ansible binary.
Prebuilt releases are available on GitHub. Download a release for the version you require and place it in the ~/.terraform.d/plugins directory, as documented here.
Caution: you will need to rename the file to match the pattern recognized by Terraform: terraform-provisioner-ansible_v<version>.
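For example, assuming a hypothetical 2.5.0 release downloaded for Linux (the downloaded file name below is illustrative; use the name of the asset you actually downloaded), the installation might look like this:
mkdir -p ~/.terraform.d/plugins
mv terraform-provisioner-ansible-linux-amd64_v2.5.0 \
  ~/.terraform.d/plugins/terraform-provisioner-ansible_v2.5.0
chmod +x ~/.terraform.d/plugins/terraform-provisioner-ansible_v2.5.0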
Alternatively, you can download and deploy an existing release using the following script:
curl -sL \
https://raw.githubusercontent.com/radekg/terraform-provisioner-ansible/master/bin/deploy-release.sh \
--output /tmp/deploy-release.sh
chmod +x /tmp/deploy-release.sh
/tmp/deploy-release.sh -v <version number>
rm -rf /tmp/deploy-release.sh
Example:
resource "aws_instance" "test_box" {
# ...
connection {
host = "..."
user = "centos"
}
provisioner "ansible" {
plays {
playbook {
file_path = "/path/to/playbook/file.yml"
roles_path = ["/path1", "/path2"]
force_handlers = false
skip_tags = ["list", "of", "tags", "to", "skip"]
start_at_task = "task-name"
tags = ["list", "of", "tags"]
}
# shared attributes
enabled = true
hosts = ["zookeeper"]
groups = ["consensus"]
become = false
become_method = "sudo"
become_user = "root"
diff = false
extra_vars = {
extra = {
variables = {
to = "pass"
}
}
}
forks = 5
inventory_file = "/optional/inventory/file/path"
limit = "limit"
vault_id = ["/vault/password/file/path"]
verbose = false
}
plays {
module {
module = "module-name"
args = {
"arbitrary" = "arguments"
}
background = 0
host_pattern = "string host pattern"
one_line = false
poll = 15
}
# shared attributes
# enabled = ...
# ...
}
plays {
galaxy_install {
force = false
server = "https://optional.api.server"
ignore_certs = false
ignore_errors = false
keep_scm_meta = false
no_deps = false
role_file = "/path/to/role/file"
roles_path = "/optional/path/to/the/directory/containing/your/roles"
verbose = false
}
# shared attributes other than:
# enabled = ...
# are NOT taken into consideration for galaxy_install
}
defaults {
hosts = ["eu-central-1"]
groups = ["platform"]
become_method = "sudo"
become_user = "root"
extra_vars = {
extra = {
variables = {
to = "pass"
}
}
}
forks = 5
inventory_file = "/optional/inventory/file/path"
limit = "limit"
vault_id = ["/vault/password/file/path"]
}
ansible_ssh_settings {
connect_timeout_seconds = 10
connection_attempts = 10
ssh_keyscan_timeout = 60
insecure_no_strict_host_key_checking = false
insecure_bastion_no_strict_host_key_checking = false
user_known_hosts_file = ""
bastion_user_known_hosts_file = ""
}
remote {
use_sudo = true
skip_install = false
skip_cleanup = false
install_version = ""
local_installer_path = ""
remote_installer_directory = "/tmp"
bootstrap_directory = "/tmp"
}
}
}
resource "aws_instance" "test_box" {
# ...
}
resource "null_resource" "test_box" {
depends_on = [aws_instance.test_box]
connection {
host = aws_instance.test_box[0].public_ip
private_key = file("./test_box")
}
provisioner "ansible" {
plays {
playbook {
file_path = "/path/to/playbook/file.yml"
roles_path = ["/path1", "/path2"]
force_handlers = false
skip_tags = ["list", "of", "tags", "to", "skip"]
start_at_task = "task-name"
tags = ["list", "of", "tags"]
}
hosts = aws_instance.test_box.*.public_ip
groups = ["consensus"]
}
}
}
Each plays must contain exactly one playbook, module, or galaxy_install. Define multiple plays when more than one Ansible action shall be executed against a host.
plays.playbook.file_path: full path to the playbook YAML file; remote provisioning: the complete parent directory will be uploaded to the host
plays.playbook.roles_path: ansible-playbook --roles-path, list of full paths to directories containing your roles; remote provisioning: all directories will be uploaded to the host; string list, default empty list (not applied)
plays.playbook.force_handlers: ansible-playbook --force-handlers, boolean, default false
plays.playbook.skip_tags: ansible-playbook --skip-tags, string list, default empty list (not applied)
plays.playbook.start_at_task: ansible-playbook --start-at-task, string, default empty string (not applied)
plays.playbook.tags: ansible-playbook --tags, string list, default empty list (not applied)
plays.module.args: ansible --args, map, default empty map (not applied); values of type list and map will be converted to strings using %+v, avoid using those unless you really know what you are doing
plays.module.background: ansible --background, int, default 0 (not applied)
plays.module.host_pattern: ansible <host-pattern>, string, default all
plays.module.one_line: ansible --one-line, boolean, default false (not applied)
plays.module.poll: ansible --poll, int, default 15 (applied only when background > 0)
plays.galaxy_install.force: ansible-galaxy install --force, bool, force overwriting an existing role, default false
plays.galaxy_install.ignore_certs: ansible-galaxy --ignore-certs, bool, ignore SSL certificate validation errors, default false
plays.galaxy_install.ignore_errors: ansible-galaxy install --ignore-errors, bool, ignore errors and continue with the next specified role, default false
plays.galaxy_install.keep_scm_meta: ansible-galaxy install --keep-scm-meta, bool, use tar instead of the scm archive option when packaging the role, default false
plays.galaxy_install.no_deps: ansible-galaxy install --no-deps, bool, don't download roles listed as dependencies, default false
plays.galaxy_install.role_file: ansible-galaxy install --role-file, string, required full path to the requirements file
plays.galaxy_install.roles_path: ansible-galaxy install --roles-path, string, the path to the directory containing your roles; the default is the roles_path configured in your ansible.cfg file (/etc/ansible/roles if not configured); for the remote provisioner: if the path starts with the filesystem path separator, the bootstrap directory will not be prepended; if the path does not start with the filesystem path separator, the path will be appended to the bootstrap directory; if the value is empty, the default value of galaxy-roles is used
plays.galaxy_install.server: ansible-galaxy install --server, string, optional API server
plays.galaxy_install.verbose: ansible-galaxy --verbose, bool, verbose mode, default false
plays.hosts: list of hosts to include in the auto-generated inventory file when inventory_file is not given, string list, default empty list; when used with a null_resource, this can be an interpolated list of host IP addresses, public or private; more details below
plays.groups: list of groups to include in the auto-generated inventory file when inventory_file is not given, string list, default empty list; more details below
plays.enabled: boolean, default true; set to false to skip execution
plays.become: ansible[-playbook] --become, boolean, default false (not applied)
plays.become_method: ansible[-playbook] --become-method, string, default sudo, only takes effect when become = true
plays.become_user: ansible[-playbook] --become-user, string, default root, only takes effect when become = true
plays.diff: ansible[-playbook] --diff, boolean, default false (not applied)
plays.extra_vars: ansible[-playbook] --extra-vars, map, default empty map (not applied); will be serialized to a JSON string, supports values of different types, including lists and maps
plays.forks: ansible[-playbook] --forks, int, default 5
plays.inventory_file: full path to an inventory file, ansible[-playbook] --inventory-file, string, default empty string; if the inventory_file attribute is not given or empty, a temporary inventory using hosts and groups will be generated; when specified, hosts and groups are not used
plays.limit: ansible[-playbook] --limit, string, default empty string (not applied)
plays.vault_id: ansible[-playbook] --vault-id, list of full paths to vault password files; remote provisioning: files will be uploaded to the server; string list, default empty list (not applied); takes precedence over plays.vault_password_file
plays.vault_password_file: ansible[-playbook] --vault-password-file, full path to the vault password file; remote provisioning: the file will be uploaded to the server; string, default empty string (not applied)
plays.verbose: ansible[-playbook] --verbose, boolean, default false (not applied)
Some of the plays settings might be common across multiple plays. Such settings can be provided using the defaults attribute. Any setting from the following list can be specified in defaults:
defaults.hosts
defaults.groups
defaults.become_method
defaults.become_user
defaults.extra_vars
defaults.forks
defaults.inventory_file
defaults.limit
defaults.vault_id
defaults.vault_password_file
None of the boolean attributes can be specified in defaults. Neither playbook nor module can be specified in defaults.
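For example, a minimal sketch (playbook paths and host names are illustrative) where two plays share inventory settings through defaults:
provisioner "ansible" {
  plays {
    playbook {
      file_path = "/path/to/playbook-one.yml"
    }
  }
  plays {
    playbook {
      file_path = "/path/to/playbook-two.yml"
    }
  }
  # shared by both plays above
  defaults {
    hosts  = ["zookeeper"]
    groups = ["consensus"]
    forks  = 5
  }
}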
ansible_ssh_settings.connect_timeout_seconds: SSH ConnectTimeout, default 10 seconds
ansible_ssh_settings.connection_attempts: SSH ConnectionAttempts, default 10
ansible_ssh_settings.ssh_keyscan_timeout: when ssh-keyscan is used, how long to try fetching the host key before failing, default 60 seconds
The following settings apply to local provisioning only:
ansible_ssh_settings.insecure_no_strict_host_key_checking: if true, host key checking will be disabled when connecting to the target host, default false; when connecting via a bastion, the bastion will not execute any SSH keyscan
ansible_ssh_settings.insecure_bastion_no_strict_host_key_checking: if true, host key checking will be disabled when connecting to the bastion host, default false
ansible_ssh_settings.user_known_hosts_file: used only when ansible_ssh_settings.insecure_no_strict_host_key_checking=false; if set, the provided path will be used instead of an auto-generated known hosts file; when executing via a bastion host, this allows the administrator to provide a known hosts file and no SSH keyscan will be executed on the bastion; default empty string
ansible_ssh_settings.bastion_user_known_hosts_file: used only when ansible_ssh_settings.insecure_bastion_no_strict_host_key_checking=false; if set, the provided path will be used instead of an auto-generated known hosts file
The existence of the remote block enables remote provisioning. To use the remote provisioner with its default settings, simply add remote {} to your provisioner.
remote.use_sudo: should sudo be used for bootstrap commands, boolean, default true (become does not make much sense here); this attribute has no relevance to the Ansible --sudo flag
remote.skip_install: if set to true, Ansible installation on the server will be skipped; assume Ansible is already installed, boolean, default false
remote.skip_cleanup: if set to true, Ansible bootstrap data will be left on the server after bootstrap, boolean, default false
remote.install_version: Ansible version to install when skip_install = false and the default installer is in use, string, default empty string (latest version available in the respective repositories)
remote.local_installer_path: full path to a custom Ansible installer on the local machine, used when skip_install = false, string, default empty string; when empty and skip_install = false, the default installer is used
remote.remote_installer_directory: full path to the remote directory where the custom Ansible installer will be deployed to and executed from, used when skip_install = false, string, default /tmp; any intermediate directories will be created; the program will be executed with sh, so use a shebang if the program requires a non-shell interpreter; the installer will be saved as tf-ansible-installer under the given directory; for /tmp, the path will be /tmp/tf-ansible-installer
remote.bootstrap_directory: full path to the remote directory where playbooks, roles, password files and such will be uploaded to, used when skip_install = false, string, default /tmp; the final directory will have tf-ansible-bootstrap appended to it; for /tmp, the directory will be /tmp/tf-ansible-bootstrap
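As an illustration, a remote block using a custom installer might look like this (the installer script and directories are hypothetical, not part of this project):
remote {
  skip_install               = false
  # hypothetical custom installer shipped alongside the Terraform module
  local_installer_path       = "${path.module}/scripts/install-ansible.sh"
  remote_installer_directory = "/opt/tf-ansible"
  bootstrap_directory        = "/opt/tf-ansible"
}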
The provisioner does not support passwords. It would be technically possible to add password support in some scenarios; however, the local provisioner with a bastion currently relies on executing an Ansible command with SSH -o ProxyCommand, and that would require putting the password on the terminal. For consistency, no password support is provided.
The local provisioner requires the resource.connection block with, at least, the user defined. After the bootstrap, the plugin inspects the connection info, checks that user and private_key are set, and verifies that provisioning indeed succeeded by checking the host (which should be an IP address of the newly created instance). If the connection info does not provide the SSH private key, ssh agent mode is assumed.
In the process, a temporary inventory is created for the newly created host, the PEM file is written to a temporary file, and a temporary known_hosts file is created. The temporary known_hosts file and the temporary PEM file are per provisioner run; an inventory is created for each plays. Files are cleaned up after the provisioner finishes or fails. The inventory is removed only if it was not supplied with inventory_file.
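For example, a minimal sketch relying on ssh agent mode (the resource name and user are illustrative, and the key is assumed to be loaded into a running ssh-agent):
resource "aws_instance" "agent_box" {
  # ...
  connection {
    # no private_key given: ssh agent mode is assumed
    user = "centos"
  }
  provisioner "ansible" {
    plays {
      playbook {
        file_path = "/path/to/playbook/file.yml"
      }
    }
  }
}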
Because the provisioner executes SSH commands outside of itself, via the Ansible command line tools, it must construct a temporary SSH known_hosts file to feed to Ansible. There are two possible scenarios.
Direct connection: if connection.host_key is given, the provisioner uses the provided host key to construct the temporary known_hosts file. If connection.host_key is not given or empty, the provisioner attempts a connection to the host and retrieves the first host key returned during the handshake (similar to ssh-keyscan, but using Golang SSH).
Bastion connection: this is a little more involved than the previous case. If connection.bastion_host_key is provided, the provisioner uses the provided bastion host key for the known_hosts file. If connection.bastion_host_key is not given or empty, the provisioner attempts a connection to the bastion host and retrieves the first host key returned during the handshake (similar to ssh-keyscan, but using Golang SSH). However, Ansible must also know the host key of the target host where the bootstrap actually happens. If connection.host_key is provided, the provisioner simply uses the provided value. If no connection.host_key is given (or it is empty), the provisioner opens an SSH connection to the bastion host and performs an ssh-keyscan operation against the target host on the bastion host.
In the ssh-keyscan case, regardless of whether bastion_host_key is used, the bastion host must have:
the cat, echo, grep, mkdir, rm and ssh-keyscan commands available on the $PATH for the SSH user
the $HOME environment variable set for the SSH user
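As an illustration, host keys can be provided up front on the connection to avoid the keyscan steps described above (all values below are placeholders):
connection {
  host             = "10.0.1.10"
  user             = "centos"
  private_key      = file("./keys/centos.pem")
  bastion_host     = "bastion.example.com"
  bastion_user     = "centos"
  # with both keys given, no ssh-keyscan is executed on the bastion
  host_key         = file("./known-keys/target_host_key.pub")
  bastion_host_key = file("./known-keys/bastion_host_key.pub")
}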
The plays.hosts and defaults.hosts attributes can be used with the local provisioner. When used with a compute resource, only the first defined host is used when generating the inventory file; additional hosts are ignored. If plays.hosts or defaults.hosts is not specified, the provisioner uses the public IP address of the Terraform provisioned resource instance. The inventory file is generated in the following format with a single host:
aFirstHost ansible_host=<ip address of the host> ansible_connection=ssh
For each group, an additional INI section will be added, where each section is:
[groupName]
aFirstHost ansible_host=<ip address of the host> ansible_connection=ssh
For a host list ["someHost"] and a group list of ["group1", "group2"], the inventory would be:
someHost ansible_host=<ip> ansible_connection=ssh
[group1]
someHost ansible_host=<ip> ansible_connection=ssh
[group2]
someHost ansible_host=<ip> ansible_connection=ssh
If hosts is an empty list or not given, the resulting generated inventory is:
<ip> ansible_connection=ssh
[group1]
<ip> ansible_connection=ssh
[group2]
<ip> ansible_connection=ssh
The plays.hosts and defaults.hosts attributes can be used with the local provisioner on a null_resource. All passed hosts are used when generating the inventory file. The inventory file is generated in the following format:
<firstHost IP>
<secondHost IP>
For each group, an additional INI section will be added, where each section is:
[groupName]
<firstHost IP>
<secondHost IP>
For a host list ["firstHost IP", "secondHost IP"]
and a group list of ["group1", "group2"]
, the inventory would be:
<firstHost IP>
<secondHost IP>
[group1]
<firstHost IP>
<secondHost IP>
[group2]
<firstHost IP>
<secondHost IP>
The remote provisioner can be enabled by adding a remote {} block to the provisioner.
resource "aws_instance" "ansible_test" {
# ...
connection {
user = "centos"
private_key = file("${path.module}/keys/centos.pem")
}
provisioner "ansible" {
plays {
# ...
}
# enable remote provisioner
remote {}
}
}
Unless remote.skip_install = true, the provisioner will install Ansible on the bootstrapped machine. Next, a temporary inventory file is created and uploaded to the host, and any playbooks, roles, and Vault password files are uploaded to the host as well.
Remote provisioning works with a Linux target host only.
This provisioner supports two main repository layouts.
Roles nested under the playbook directory:
.
├── install-tree.yml
└── roles
└── tree
└── tasks
└── main.yml
Roles and playbooks directories separate:
.
├── playbooks
│ └── install-tree.yml
└── roles
└── tree
└── tasks
└── main.yml
In the second case, to reference the roles, it is necessary to use the plays.playbook.roles_path attribute:
plays {
playbook {
file_path = ".../playbooks/install-tree.yml"
roles_path = [
".../ansible-data/roles"
]
}
}
In the first case, it is sufficient to use only plays.playbook.file_path; the roles are nested under the playbook directory, thus available to Ansible:
plays {
playbook {
file_path = ".../playbooks/install-tree.yml"
}
}
A remark regarding remote provisioning: the remote provisioner must upload referenced playbooks and role paths to the remote server. In the case of a playbook, the complete parent directory of the YAML file will be uploaded. The remote provisioner attempts to deduplicate uploads: if multiple plays reference the same playbook, the playbook will be uploaded only once. This is achieved by generating an MD5 hash of the absolute path to the playbook's parent directory and storing your playbooks at ${remote.bootstrap_directory}/${md5-hash} on the remote server.
For the roles path, the complete directory referenced in roles_path will be uploaded to the remote server. The same deduplication method applies, but the MD5 hash is computed from the roles_path itself.
Integration tests require ansible and ansible-playbook on the $PATH. To run tests:
make test-verbose
To cut a release, run:
curl -sL https://raw.githubusercontent.com/radekg/git-release/master/git-release --output /tmp/git-release
chmod +x /tmp/git-release
/tmp/git-release --repository-path=$GOPATH/src/github.com/radekg/terraform-provisioner-ansible
rm -rf /tmp/git-release
After the release is cut, build the binaries for the release:
git checkout v${RELEASE_VERSION}
./bin/build-release-binaries.sh
Build and push the Docker image:
git checkout v${RELEASE_VERSION}
docker build --build-arg TAP_VERSION=$(cat .version) -t radekg/terraform-ansible:$(cat .version) .
docker login --username=radekg
docker tag radekg/terraform-ansible:$(cat .version) radekg/terraform-ansible:latest
docker push radekg/terraform-ansible:$(cat .version)
docker push radekg/terraform-ansible:latest
Note that the version is hardcoded in the Dockerfile. You may wish to update it after release.