nathanielks opened this issue 9 years ago
+1 to this for the same reason: multiple environments managed by the same configs. We also considered writing a wrapper to do something like this. I saw this before, and it looked promising: https://github.com/mozilla/socorro-infra/blob/master/terraform/wrapper.sh
ooh, thanks for the heads up, @mlrobinson!
Here's what I whipped up: https://gist.github.com/nathanielks/5bd4de708e831bbc170f, albeit for S3 only atm.
@mlrobinson my script has been updated and improved since I posted it, might want to give it a looksee.
I've been dealing with this issue a bit differently.
When I have a terraform config that actually manages multiple different states, I check out the code for that root module and then use `terraform init` to copy it somewhere else and configure it for remote state:
```shell
mkdir /path/to/instancedir
cd /path/to/instancedir
terraform init /path/to/config -backend="s3" -backend-config="etc,etc"
terraform plan # etc
```
Since the state is saved remotely, I then just delete the instance directory. If I need to make modifications, I just repeat with the same remote config to seed my copy with the correct remote state.
I've been considering wrapping a script around this but so far I've not got around to it. I expect I'll write one as I continue to create more and more distinct instances of this config (I'm using it to create development environments for a bunch of different developers, so this set will grow) but for now I just wanted to share that in case someone else is inspired to wrap a helper script around it.
@apparentlymart interesting strategy. You could even write it to /tmp, as it doesn't need to stick around for long.
Very interesting. Any downside to not caching the remote state locally and having to download it whenever you swap deployments? Other than having to save the S3 configs for each deployment separately to restore with, and the brief wait to download things, I can't think of one. You can delete the directory when done with a set of operations, or let it persist for a while as a history of what happened. I think I like this approach.
@nathanielks I have indeed been creating them in /tmp :)
I expect if I wrote a wrapper script around it I'd make a random dir under /tmp, init the module into it, and then launch a child shell with the cwd set to that temporary directory so I can run whatever terraform commands I want to run. When that child shell exits, delete the directory. That is essentially what I've been doing manually, aside from launching the child shell.
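A minimal sketch of that wrapper, under the assumptions above (the `tf_workspace` name, bucket, and key are placeholders of mine, and the `terraform init` syntax is the 0.6-era form used earlier in this thread):

```shell
#!/bin/sh
# Hypothetical helper: create a throwaway working directory, init the
# module into it with remote state, drop into a child shell to run
# plan/apply, and delete the directory when that shell exits.
# Bucket and key names below are placeholders.
tf_workspace() {
  config_dir=$1                                    # root module source
  workdir=$(mktemp -d /tmp/tf-instance.XXXXXX) || return 1
  (
    cd "$workdir" || exit 1
    terraform init \
      -backend=s3 \
      -backend-config="bucket=my-state-bucket" \
      -backend-config="key=instance.tfstate" \
      "$config_dir"
    # child shell: run whatever terraform commands you want, then exit
    "${SHELL:-sh}"
  )
  status=$?
  rm -rf "$workdir"                                # state lives remotely
  return $status
}
```

Since the state is stored remotely, nothing of value is lost when the directory is removed.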
@apparentlymart I'm smelling what you're stepping in
Using the idea proposed by @apparentlymart, I've been using the following script:
```shell
#!/bin/sh
readonly env=$1

[ "$env" != "staging" ] && [ "$env" != "production" ] && {
  echo "Unknown environment: \"$env\", must be one of \"production\" or \"staging\"."
  exit 1
}

readonly selfPath=$(cd "$(dirname "$0")"; pwd -P)
readonly origPath=$(pwd)
readonly tmpPath="/tmp/$env-$(date +%s)"

mkdir -p "$tmpPath"
cd "$tmpPath" || exit 1

terraform init -backend=atlas -backend-config="name=blendle/k8s-$env" "$selfPath/../terraform"
echo "atlas { name = \"blendle/k8s-$env\" }" > atlas.tf
terraform remote pull

"${SHELL:-bash}"

cd "$origPath" || exit 1
rm -r "$tmpPath"
```
```
$ bin/tf production
# Initialized blank state with remote state enabled!
# Local and remote state in sync

$ terraform plan
# Refreshing Terraform state prior to plan...
#
# google_container_cluster.blendle: Refreshing state... (ID: blendle)
#
# No changes. Infrastructure is up-to-date. This means that Terraform
# could not detect any differences between your configuration and
# the real physical resources that exist. As a result, Terraform
# doesn't need to do anything.

$ exit
```
It's not perfect, but it works. Still, native environment integration in Terraform would be nice!
+1 - There's no reason why terraform couldn't support named remote config cache files. By forcing all remote state cache files to be `.terraform/terraform.tfstate`, we are effectively prohibited from managing multiple identical environments with the same code simply because we want to store that state remotely and share it with others. If we choose to keep that state local, it's a simple matter of specifying `-state=file` and `-var-file=file`, and suddenly we can manage any number of mostly identical but separate environments without a problem.
Something as simple as adding an option like `-remote-config-cache-file=` would do it.
When the `terraform remote config` command is passed a `-state` file, I expect it to update that state file in place with the remote config elements. Currently it's moved to `.terraform/terraform.tfstate`.

When a `-state` file in an `apply` or `plan` command has a remote config, I expect that remote config to be honored. Currently it's ignored.

Managing multiple environments with `-state` and `-var-file` arguments would work just fine if remote state configs worked as expected.
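For concreteness, the local-state workflow described above might be wrapped like this (a sketch of mine; the `tf_env` name and `environments/` layout are illustrative, not from the comment):

```shell
#!/bin/sh
# Hypothetical helper for the local-state workflow: one configuration,
# with one -state/-var-file pair per environment. The environments/
# directory layout is illustrative.
tf_env() {
  env=$1; shift
  terraform "$@" \
    -state="environments/$env.tfstate" \
    -var-file="environments/$env.tfvars"
}

# usage:
#   tf_env staging plan
#   tf_env production apply
```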
@mmell - I agree 1000% :)
If it's of any help, here's how I'm currently managing multiple environments in AWS all with remote state. I wrote a python wrapper around terraform to manage symlinks to the "correct" remote state cache file.
Disclaimer: this is not the latest version of my working code, but it should be good enough to give you the basic idea.
https://github.com/pll/terraform_infrastructure/tree/master/modules
I independently started out with terraform a few weeks ago, and apparently got to the same results: I find myself having to set state and var-file to the current environment in a wrapper script:
-state "$ENV_ROOT/terraform.tfstate" -var-file "$ENV_ROOT/terraform.tfvars"
For the remote config on s3 I have:
-backend-config="key=xxx-$DEPLOYMENT.tfstate"
It would seem that single definitions with multiple instances, each with independent state and variables, is a valid use case that boils down to setting a few paths. It could be supported elegantly if terraform were to read these kinds of settings directly from the environment.
If so, people could do away with wrappers entirely.
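Put together, the wrapper described above might look roughly like this sketch (assuming `ENV_ROOT` and `DEPLOYMENT` are set by the caller; the `tf_deploy` name is mine and the `xxx-` key prefix is kept from the comment as a placeholder):

```shell
#!/bin/sh
# Sketch of the wrapper described above. ENV_ROOT and DEPLOYMENT are
# assumed to be set by the caller; the xxx- key prefix is a placeholder
# carried over from the comment.
tf_deploy() {
  : "${ENV_ROOT:?set ENV_ROOT to the current environment directory}"
  : "${DEPLOYMENT:?set DEPLOYMENT to the current deployment name}"
  terraform remote config \
    -backend=s3 \
    -backend-config="key=xxx-$DEPLOYMENT.tfstate"
  terraform "$@" \
    -state="$ENV_ROOT/terraform.tfstate" \
    -var-file="$ENV_ROOT/terraform.tfvars"
}
```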
PS: I would consider what @mmell said (`remote config` not honoring `-state`) a bug.
> If so people could do away with wrappers entirely.
@soulrebel - Wouldn't that be nice? There are so many examples of low-hanging fruit that could eliminate the need for wrappers entirely. The two biggest ones being:
+1 Simply having support for `-state` and `-backup` apply to remote state would solve our issues as well. We have many deployments that we have separate terraform configs for and want to keep isolated, for reasons I think are very important, i.e. staging deployments should never play in production environments.
> @pll: An option for choosing aws credentials based on profile names in `~/.aws/credentials`
I just have this in my provider.tf
```hcl
provider "aws" {
  region                  = "us-east-1"
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "<aws_creds_profile_name>"
}
```
As described here: https://www.terraform.io/docs/providers/aws/index.html in the section called "Shared Credentials file".
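If hard-coding the profile in `provider.tf` is undesirable, the underlying AWS SDK also reads the standard `AWS_PROFILE` environment variable, so (assuming your provider version picks it up) the profile can be chosen per run. A hypothetical helper, with illustrative profile names:

```shell
#!/bin/sh
# Hypothetical helper: select the AWS credentials profile per run via
# the standard AWS_PROFILE environment variable (read by the AWS SDK),
# instead of hard-coding `profile` in provider.tf.
run_as() {
  profile=$1; shift
  AWS_PROFILE=$profile terraform "$@"
}

# usage:
#   run_as staging plan
#   run_as production apply
```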
+1 This is what I expected the -state to do. Strange that it doesn't.
I am having the same issue. I want to be able to run multiple TF plans/applies simultaneously, and without `-state` writing the state to a unique name, that's not possible without doing something nasty.
To work around this issue, we did something "nasty" as noted above: creating symlinks to state files. It has worked for us to date, but I would rather not have to do this.
Something like this....
```shell
# Note: Remote state does not support -state options, so we need to cheat
# and symlink to the proper state file. Terraform expects the state to be
# relative to where it is run from, i.e. <stack directory>/.terraform

# Remove the link if it exists
[ -e "${terraform_stack_dir}/.terraform" ] && rm -f "${terraform_stack_dir}/.terraform"

# Create the cheater link
ln -fs "${terraform_deployment_dir}/.terraform" "${terraform_stack_dir}/.terraform" || { echo "Unable to create link" ; return 1 ;}
echo "-- Cheating on remote state... Linked '${terraform_stack_dir}/.terraform' -> '$(readlink "${terraform_stack_dir}/.terraform")'"

# The -state and -backup options do not work for the remote config command;
# they simply get added to the CWD under .terraform, so go there first.
cd "${terraform_deployment_dir}" || return 1

echo "-- Setting up remote state..."
terraform remote config -backend=S3 \
  -backend-config="bucket=${terraform_deployment_state_bucket}" \
  -backend-config="key=${terraform_deployment_state_key}" \
  -backend-config="access_key=${terraform_aws_access_key_id}" \
  -backend-config="secret_key=${terraform_aws_secret_access_key}" \
  -backend-config="region=us-east-1"

if [ "$?" != 0 ]; then
  echo "-- Unable to configure remote storage, turning around..."
  return 1
fi

return 0
```
@octalthorpe running `terraform remote config ...` causes the local state file to be ignored (except, of course, for the remote config!). The symlink to the local state file has no effect except to store a local copy of the S3 version.
Related to #1295, I'd like to be able to manage multiple environments from one configuration. I know I can specify applicable var and state files locally, but I'm not sure it's possible to specify a specific remote config. I'd imagine the idea would be pull the desired remote config down, then run any actions on that state file with the var files for the environment I'd like to use. I'm thinking of writing a wrapper around terraform that would allow me to specify an environment that would pull the desired environment before running any actions on it. Sound about right?
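Such a wrapper could be as small as the following sketch (the `tf` name echoes the `bin/tf` script earlier in the thread; the bucket name and var-file layout are assumptions of mine, not from the comment): point terraform at the chosen environment's remote state, then run the requested command with that environment's var file.

```shell
#!/bin/sh
# Sketch of the wrapper proposed above. Bucket name and var-file layout
# are placeholders. Pulls the chosen environment's remote config, then
# runs the requested command with that environment's variables.
tf() {
  env=$1; shift
  terraform remote config \
    -backend=s3 \
    -backend-config="bucket=my-state-bucket" \
    -backend-config="key=$env/terraform.tfstate" || return 1
  terraform "$@" -var-file="$env.tfvars"
}

# usage: tf staging plan
```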