cruzanmo closed this issue 4 years ago
Thanks for the report!
Yes, that example will need to be updated. That said, all the code is currently set to only work with Terraform 0.8.x, so I'll have to do a sweep through everything to add support for Terraform 0.9.x. PRs welcome :)
@brikis98 @cruzanmo I would be interested in helping with a few PRs. Also @cruzanmo do you have your example available?
@anash28 PRs would be very welcome!
Is there any example available now?
@brikis98 Your example in the book for configuring remote state in S3 refers to a command-line command, right? I'm assuming it now needs to be a file that is run? The HashiCorp website is very unclear for newbies (like me) about what I'm supposed to do with the info it provides.
@eyekelly Yes, to manage remote state with the current versions of Terraform, you define a backend configuration directly in your `.tf` files. For example, to store state in the S3 backend, you could add the following to `main.tf`:

```hcl
terraform {
  backend "s3" {
    bucket  = "name-of-your-s3-bucket"
    region  = "us-east-1"
    key     = "terraform.tfstate"
    encrypt = true
  }
}
```

Once you've added this, run `terraform init`, and your remote state will be configured.
Thanks very much @brikis98, got it working now.
No problem.
You may also want to enable locking by adding the `dynamodb_table` parameter and pointing it at a DynamoDB table with a primary key called `LockID`.
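For reference, a minimal sketch of what that might look like (the table name here is a placeholder, and the capacity values are just illustrative):

```hcl
# DynamoDB table used by the S3 backend for state locking.
# The hash key must be a string attribute named "LockID".
resource "aws_dynamodb_table" "terraform_locks" {
  name           = "terraform-lock-table"  # placeholder name
  hash_key       = "LockID"
  read_capacity  = 1
  write_capacity = 1

  attribute {
    name = "LockID"
    type = "S"
  }
}

# The backend block from above, extended to point at that table:
terraform {
  backend "s3" {
    bucket         = "name-of-your-s3-bucket"
    region         = "us-east-1"
    key            = "terraform.tfstate"
    encrypt        = true
    dynamodb_table = "terraform-lock-table"
  }
}
```

After re-running `terraform init`, Terraform will acquire a lock in that table before any operation that touches the state.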
Ok, thanks @brikis98, I'm just working my way through your book atm. It's really helping me understand DevOps better.
That's great to hear :)
@brikis98 I have an issue: `.terragrunt` has been replaced with `terraform.tfvars`, right? I'm not sure what to do next; I'm getting this error:

```
[terragrunt] 2017/11/13 14:57:13 At 4:3: root.terragrunt.lock: map must have string keys
[terragrunt] 2017/11/13 14:57:13 Unable to determine underlying exit code, so Terragrunt will exit with error code 1
```
Please check the Terragrunt docs for the latest! https://github.com/gruntwork-io/terragrunt
If you have questions, feel free to post in https://community.gruntwork.io/.
@brikis98 I'm now seeing in more recent versions of Terraform (0.11.3 here) that `terraform_remote_state` doesn't work anymore either. I was able to update the remote state config directives in the book to use remote state stanzas instead, but much of the example code relies on capabilities that appear to be deprecated.
See https://github.com/hashicorp/terraform/issues/17197 and https://github.com/hashicorp/terraform/issues/12316 for more details. As a relative newbie here, I'm sort of at a loss for how to work around this (and thus complete chapter 4).
Would happily provide my git repo if you wanted to see what I've tried thus far.
@vaficionado What are you trying to do and what actual issue are you hitting?
So I have:

```hcl
data "template_file" "user_data" {
  template = "${file("${path.module}/user-data.sh")}"

  vars {
    server_port = "${var.server_port}"
    db_address  = "${data.terraform_remote_state.db.address}"
    db_port     = "${data.terraform_remote_state.db.port}"
  }
}
```
This is in the webserver-cluster module, and it references these outputs from the database module:

```hcl
output "address" {
  value = "${aws_db_instance.example.address}"
}

output "port" {
  value = "${aws_db_instance.example.port}"
}
```
Deploying the RDS instances works fine, and I can validate that the parameters are present in the shared tfstate, but running a plan against the webserver-cluster turns up the following:
```
Error: Error running plan: 3 error(s) occurred:

* module.webserver_cluster.output.dbserver: Resource 'data.terraform_remote_state.db' does not have attribute 'address' for variable 'data.terraform_remote_state.db.address'
* module.webserver_cluster.data.template_file.user_data: 1 error(s) occurred:
* module.webserver_cluster.data.template_file.user_data: Resource 'data.terraform_remote_state.db' does not have attribute 'port' for variable 'data.terraform_remote_state.db.port'
* module.webserver_cluster.output.dbport: Resource 'data.terraform_remote_state.db' does not have attribute 'port' for variable 'data.terraform_remote_state.db.port'
```
Can you show your `terraform_remote_state` data source config? And your `backend "s3" { ... }` config?
Sure:

`terraform_remote_state` (from webserver module):

```hcl
data "terraform_remote_state" "db" {
  backend = "s3"

  config {
    bucket  = "${var.db_remote_state_bucket}"
    key     = "${var.db_remote_state_key}"
    region  = "us-east-1"
    encrypt = "true"
  }
}
```

Backend stanza (mysql):

```hcl
terraform {
  backend "s3" {
    bucket  = "<<redacted>>"
    key     = "stage/data-stores/mysql/terraform.tfstate"
    region  = "us-east-1"
    encrypt = "true"
  }
}
```

Backend stanza (webserver):

```hcl
terraform {
  backend "s3" {
    bucket  = "<<redacted>>"
    key     = "stage/services/webserver-cluster/terraform.tfstate"
    region  = "us-east-1"
    encrypt = "true"
  }
}
```
What are `"${var.db_remote_state_bucket}"` and `"${var.db_remote_state_key}"` set to?
Sorry, I just realized you'd need that as well:

```hcl
module "webserver_cluster" {
  source = "../../../modules/services/webserver-cluster"

  cluster_name           = "webservers-stage"
  db_remote_state_bucket = "<<<redacted, but yes it's the same as the others above>>>"
  db_remote_state_key    = "stage/data-stores/mysql/terraform.tfstate"

  instance_type = "t2.micro"
  min_size      = 2
  max_size      = 2
}
```
Can you open `stage/data-stores/mysql/terraform.tfstate` in your S3 bucket and paste the contents of the (non-sensitive) root outputs? It should look something like this:

```json
{
  "version": 3,
  "terraform_version": "0.11.3",
  "serial": 13,
  "modules": [
    {
      "path": [
        "root"
      ],
      "outputs": {
        "db_name": {
          "sensitive": false,
          "type": "string",
          "value": "example"
        },
        "name": {
          "sensitive": false,
          "type": "string",
          "value": "mysql-stage"
        }
      }
    }
  ]
}
```
```json
{
  "version": 3,
  "terraform_version": "0.11.3",
  "serial": 5,
  "lineage": "01bd4614-b710-489b-a878-563ae4c62877",
  "modules": [
    {
      "path": [
        "root"
      ],
      "outputs": {},
      "resources": {},
      "depends_on": []
    },
    {
      "path": [
        "root",
        "rds_instance"
      ],
      "outputs": {
        "address": {
          "sensitive": false,
          "type": "string",
          "value": "<<dbname.fqdn>>"
        },
        "port": {
          "sensitive": false,
          "type": "string",
          "value": "3306"
        }
      }
    }
  ]
}
```
Only the root outputs are available with `terraform_remote_state`. Your root outputs are empty. I'm guessing that you deployed MySQL with a module that had those outputs, but you did not "proxy" those outputs into the root folder that was deploying that module.
Edit: I just realized you linked one above, didn't notice it. I will review that doc, thank you so much for the responses!
You're probably doing something like this in your `mysql` folder:

```hcl
provider "aws" {
  region = "us-east-1"
}

module "mysql" {
  source = "../modules/mysql"

  # ... other params
}
```

What you need to add is:

```hcl
output "address" {
  value = "${module.mysql.address}"
}

output "port" {
  value = "${module.mysql.port}"
}
```

After running `apply` on that, the outputs you want will be in the root of your tfstate and accessible to other modules via `terraform_remote_state`.
Thank you! This was it. I had set up an `outputs.tf` but didn't use the `${module.name.var}` syntax; I had done that wrong. Now everything seems to be working well. Appreciate the help, and thanks for the great book!
Am I right that there is no way to create an S3 bucket (as backend state storage) directly from Terraform? I mean, the bucket should already exist, right? In that case, I guess there is no sense in keeping this resource in `main.tf` if it is created manually via the AWS console or CLI.
You definitely can create an S3 bucket with Terraform: https://www.terraform.io/docs/providers/aws/r/s3_bucket.html
That said, if you're using the bucket to store state, you get a bit of a chicken-and-egg problem: you have to run `apply` with the bucket in the code, but no `backend`; then add the `backend` and run `init`; and then if you ever want to run `destroy`, you have to do more futzing, or you'll be deleting the very bucket you're using to store the state you're deleting. So creating this bucket manually or via a tool like Terragrunt is OK.
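If you do want to manage the bucket with Terraform anyway, a bare-bones bootstrap config in a separate folder might look something like this (the bucket name is a placeholder); you'd `apply` it once before adding any `backend` block:

```hcl
# One-time bootstrap: create the bucket that will hold Terraform state.
# Run `terraform apply` on this before any backend "s3" block exists.
resource "aws_s3_bucket" "terraform_state" {
  bucket = "name-of-your-s3-bucket"  # placeholder name

  # Keep every revision of the state file so you can roll back mistakes.
  versioning {
    enabled = true
  }

  # Guard against `terraform destroy` deleting the bucket that holds
  # the very state being destroyed.
  lifecycle {
    prevent_destroy = true
  }
}
```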
@brikis98 Yes, that's exactly what I was talking about: there is a kind of mutual dependency, or deadlock, when you try to simultaneously create an S3 resource and use it as a backend. For now I just use two separate folders, `global` and `services`, and run Terraform on the `global` one first to create the bucket.
Terragrunt sounds interesting; I guess I'll try the tool once I'm a bit more familiar with Terraform itself and DevOps in general.
Can confirm that this solution solved my issue, and is also present in the code in this repository (I just didn't know what to look for). This could probably be closed :)
Just found this, after 30 mins of wondering why the `terraform remote config` command was being ignored.
On your next set of slides about what DevOps really looks like, try to find an image of someone trying to shove half a pound of butter up a sparrow's arse with a needle. :)
This issue was fixed in Terraform: Up & Running, 2nd edition, which came out in 2019.
The book code uses `terraform remote config`, which was recently removed: https://github.com/hashicorp/terraform/blob/master/website/source/upgrade-guides/0-9.html.markdown#remote-state. You now need to use https://www.terraform.io/docs/backends/types/s3.html instead.
Should this example be updated? https://github.com/brikis98/terraform-up-and-running-code/tree/master/code/terraform/03-terraform-state/file-layout-example/global/s3