hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

[Bug]: Converting DynamoDB from On Demand to Provisioned capacity while ignoring read/write capacity sets a default of 1 #38100

Open bwhaley opened 4 months ago

bwhaley commented 4 months ago

Terraform Core Version

1.7.1

AWS Provider Version

5.44

Affected Resource(s)

aws_dynamodb_table

Expected Behavior

When changing a DynamoDB table from On Demand to Provisioned capacity, an initial capacity value is required for reads and writes. If no value is provided when switching to provisioned capacity, the apply should fail with an error.

Actual Behavior

The terraform-provider-aws docs state that these are required values:

read_capacity - (Optional) Number of read units for this table. If the billing_mode is PROVISIONED, this field is required.

However, the docs also state:

We recommend using lifecycle ignore_changes for read_capacity and/or write_capacity if there's autoscaling policy attached to the table.

So the guidance is to ignore changes to read/write capacity, which makes sense. However, on the initial change to PROVISIONED, if these values are ignored, the provider silently defaults them to 1. On a very busy table, this will result in throttling until autoscaling kicks in.

This bit me hard today. It hurt.

Relevant Error/Panic Output Snippet

No response

Terraform Configuration Files

resource "aws_dynamodb_table" "table" {
...
  billing_mode   = "PROVISIONED"
  read_capacity  = 1000
  write_capacity = 1000
  lifecycle {
    ignore_changes = [
      read_capacity,
      write_capacity,
    ]
  }
...
}

Steps to Reproduce

  1. Create a table with billing_mode = "PAY_PER_REQUEST"
  2. Change billing_mode = "PROVISIONED" and set read_capacity and write_capacity to some value > 1
  3. Set lifecycle { ignore_changes = [read_capacity, write_capacity] }
  4. Apply the change, and observe that the initial provisioned capacity is set to 1 for both reads and writes.

Debug Output

No response

Panic Output

No response

Important Factoids

I believe the culprit is here:

func expandProvisionedThroughputField(id string, data map[string]interface{}, key string, billingMode, oldBillingMode awstypes.BillingMode) int64 {
    v := data[key].(int)
    if v == 0 && billingMode == awstypes.BillingModeProvisioned && oldBillingMode == awstypes.BillingModePayPerRequest {
        log.Printf("[WARN] Overriding %[1]s on DynamoDB Table (%[2]s) to %[3]d. Switching from billing mode %[4]q to %[5]q without value for %[1]s. Assuming changes are being ignored.",
            key, id, provisionedThroughputMinValue, oldBillingMode, billingMode)
        v = provisionedThroughputMinValue
    }
    return int64(v)
}

Where provisionedThroughputMinValue is a constant equal to 1.
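To illustrate, here is a minimal, self-contained sketch of that logic (not the provider's actual code: the `awstypes.BillingMode` values are reduced to plain strings and the constants are inlined). With `ignore_changes` in effect, the planned capacity arrives as 0, and the function silently substitutes the minimum value instead of erroring:

```go
package main

import (
	"fmt"
	"log"
)

// Simplified stand-ins for the provider's awstypes constants.
const (
	billingModeProvisioned        = "PROVISIONED"
	billingModePayPerRequest      = "PAY_PER_REQUEST"
	provisionedThroughputMinValue = 1
)

// expandProvisionedThroughputField mirrors the quoted provider logic,
// with the billing-mode type reduced to a plain string.
func expandProvisionedThroughputField(id string, data map[string]interface{}, key string, billingMode, oldBillingMode string) int64 {
	v := data[key].(int)
	if v == 0 && billingMode == billingModeProvisioned && oldBillingMode == billingModePayPerRequest {
		log.Printf("[WARN] Overriding %s on DynamoDB Table (%s) to %d", key, id, provisionedThroughputMinValue)
		v = provisionedThroughputMinValue
	}
	return int64(v)
}

func main() {
	// Capacity is 0 because ignore_changes dropped the configured value.
	got := expandProvisionedThroughputField("example",
		map[string]interface{}{"read_capacity": 0},
		"read_capacity", billingModeProvisioned, billingModePayPerRequest)
	fmt.Println(got) // prints 1, not an error
}
```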

References

No response

Would you like to implement a fix?

None

bwhaley commented 4 months ago

I'm not sure what the ideal behavior is here, but it definitely shouldn't be to silently set an extremely low magic default. It should either result in an error, or the provider should query the table's recent consumed read/write capacity units and use that as the initial value.
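The error variant could be shaped roughly like this (a hypothetical sketch only, not the provider's code; `expandProvisionedThroughputFieldStrict` is an invented name and the billing-mode type is again a plain string):

```go
package main

import "fmt"

const (
	billingModeProvisioned   = "PROVISIONED"
	billingModePayPerRequest = "PAY_PER_REQUEST"
)

// expandProvisionedThroughputFieldStrict is a hypothetical variant that
// refuses to invent a capacity value when switching billing modes,
// surfacing an error instead of defaulting to 1.
func expandProvisionedThroughputFieldStrict(id string, data map[string]interface{}, key string, billingMode, oldBillingMode string) (int64, error) {
	v := data[key].(int)
	if v == 0 && billingMode == billingModeProvisioned && oldBillingMode == billingModePayPerRequest {
		return 0, fmt.Errorf("DynamoDB Table (%s): %s must be set when switching billing mode from %q to %q", id, key, oldBillingMode, billingMode)
	}
	return int64(v), nil
}

func main() {
	_, err := expandProvisionedThroughputFieldStrict("example",
		map[string]interface{}{"read_capacity": 0},
		"read_capacity", billingModeProvisioned, billingModePayPerRequest)
	fmt.Println(err != nil) // true: the missing value surfaces as an error
}
```

The downside of the error approach is that it forces users to remove `ignore_changes` for one apply; querying consumed capacity avoids that but adds an extra API call during plan.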