Closed tiagoposse closed 2 months ago
💥 This pull request now has conflicts. Could you fix it @tiagoposse? 🙏
conflict was fixed and requested changes implemented
anything missing?
/terratest
@tiagoposse this is annoying and stupid, but can you add a description to all variables in the examples/complete directory?
Running input-descriptions in /__w/terraform-aws-tfstate-backend/terraform-aws-tfstate-backend/examples/complete
1..1
Test: check if terraform inputs have descriptions
File: input-descriptions.bats
---------------------------------
region is missing a description
---------------------------------
not ok 1 check if terraform inputs have descriptions
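For reference, the failing check just wants a `description` argument on every input variable. A minimal sketch of the fix, assuming the failing input lives in `examples/complete/variables.tf` (the type and description text here are illustrative, not from the PR):

```hcl
variable "region" {
  type        = string
  description = "AWS region in which to deploy the example resources"
}
```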
@tiagoposse Thank you for this PR.
I am, however, not inclined to accept it, because bucket keys add complexity, reduce auditability, and, for the kind of usage Terraform makes of the S3 bucket, make practically no difference in cost or availability. IIRC, Terraform makes 1 object read and 1 object write for each plan or apply. Maybe I'm wrong and it makes 4 reads and 4 writes due to locks and intermediate steps, for a total of 8 KMS operations. Then maybe, if you are using Cloud Posse modules that cause reads of the S3-stored state of other root modules, we can get up to 20.
In the biggest usage I have seen, a company runs about 1,600 terraform plan operations a night for drift detection. If those used 20 KMS operations each, that gives us 32,000 KMS operations. At current pricing of $0.03 per 10,000 requests, that is less than $0.10. And that is only if you are using a KMS key. If you are using basic Server-Side Encryption (SSE-S3) you do not even have to pay that.
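The back-of-the-envelope math above can be checked in a few lines (the 1,600 plans, 20 operations per plan, and $0.03 per 10,000 requests are the figures from the comment):

```python
# Nightly KMS cost estimate for Terraform drift detection,
# using the figures cited in the discussion.
plans_per_night = 1_600
kms_ops_per_plan = 20          # pessimistic upper bound from the comment
price_per_10k_requests = 0.03  # current KMS request pricing

total_ops = plans_per_night * kms_ops_per_plan
nightly_cost = total_ops / 10_000 * price_per_10k_requests

print(f"{total_ops} KMS operations -> ${nightly_cost:.3f} per night")
# 32000 operations -> $0.096 per night, i.e. under $0.10
```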
In exchange for saving $0.10, you lose CloudTrail access logs and the ability to scope KMS permissions to the object level, because when using bucket keys, those things happen at the bucket level. See Changes to note before enabling an S3 Bucket Key for more details.
The real use case for bucket keys is when you are storing something like a data lake with millions of Parquet files and uploading large numbers of files in a batch, to the point where you are running into the KMS API rate limits (10,000 to 100,000 per second). By contrast, Terraform uses so few operations that, in my opinion, a bucket key is just not an option worth supporting. In fact, having it as an option seems counter-productive because it then becomes another thing people have to think about and make a decision about.
Thank you for this @Nuru. Your points make sense, and I agree that losing access logs is not worth it to save 10c. This PR was purely to offer the option for smaller use cases (we, for example, use this in buckets hosting states for the development environment). But I see how this can increase complexity and create confusion. If the final decision is to not accept it, then so be it. If the final decision is to accept it, I'll happily perform the requested changes. Let me know once this is decided :)
what
Allow the user to enable the usage of a bucket key when using server side encryption
why
Helps reduce costs when using the S3 backend
references
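For context, enabling a bucket key in the Terraform AWS provider is a single flag on the server-side encryption rule. A hedged sketch of what this PR's option would ultimately toggle, using hypothetical resource names:

```hcl
resource "aws_s3_bucket_server_side_encryption_configuration" "state" {
  bucket = aws_s3_bucket.state.id # hypothetical bucket resource name

  rule {
    # The option this PR proposes to expose as a module variable.
    bucket_key_enabled = true

    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.state.arn # hypothetical key resource
    }
  }
}
```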