bwhaley opened this issue 4 months ago (status: Open)
I'm not sure what the ideal behavior is for this, but it definitely shouldn't be to set a default magic value that is extremely low. I think it should either result in an error, or it should query to find the latest value of consumed read/write capacity units and use that as the initial value.
Terraform Core Version
1.7.1
AWS Provider Version
5.44
Affected Resource(s)
aws_dynamodb_table
Expected Behavior
When changing a DynamoDB table from On-Demand to Provisioned capacity, initial capacity values are required for reads and writes. If no values are provided when switching to provisioned capacity, it should result in an error.
Actual Behavior
The terraform-provider-aws docs state that these are required values:
However, the docs also state:
So the guidance is to ignore changes to read/write capacity, which makes sense. However, on the initial change to PROVISIONED, if these values are ignored, the code seems to default them to 1. On a very busy table, this will result in throttling until autoscaling kicks in. This bit me hard today. It hurt.
Relevant Error/Panic Output Snippet
No response
Terraform Configuration Files
Steps to Reproduce
1. Create a table with billing_mode = "PAY_PER_REQUEST".
2. Change to billing_mode = "PROVISIONED" and set read_capacity and write_capacity to some value > 1.
3. Add lifecycle { ignore_changes = [read_capacity, write_capacity] }.
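The steps above correspond to a configuration along these lines (a sketch; the table name, key schema, and capacity values are illustrative placeholders):

```hcl
# Steps 2-3: the table was originally created with
# billing_mode = "PAY_PER_REQUEST" (step 1), then switched to
# provisioned capacity with explicit values, while ignoring
# subsequent autoscaling-driven changes.
resource "aws_dynamodb_table" "example" {
  name           = "example"
  billing_mode   = "PROVISIONED"
  read_capacity  = 100 # some value > 1
  write_capacity = 100
  hash_key       = "id"

  attribute {
    name = "id"
    type = "S"
  }

  lifecycle {
    ignore_changes = [read_capacity, write_capacity]
  }
}
```

With ignore_changes in place on the same apply that flips billing_mode, the explicit values appear to be discarded and the provider falls back to its minimum of 1, which is the behavior reported above.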
Debug Output
No response
Panic Output
No response
Important Factoids
I believe the culprit is here, where provisionedThroughputMinValue is a constant equal to 1.
References
No response
Would you like to implement a fix?
None