hashicorp / terraform-provider-aws

The AWS Provider enables Terraform to manage AWS resources.
https://registry.terraform.io/providers/hashicorp/aws
Mozilla Public License 2.0

Redshift Elastic Resize Not Working When Node Count Increased - Classic Resize Performed Instead #11303

Open · frosty-007 opened 4 years ago

frosty-007 commented 4 years ago

Community Note

Terraform Version

Terraform v0.12.4
AWS provider version = "~> 2.8"

Affected Resource(s)

* aws_redshift_cluster

Terraform Configuration Files

# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key: https://keybase.io/hashicorp
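
The configuration section above was left as the issue template. A minimal configuration that reproduces the scenario might look like the following; the resource name and cluster identifier are taken from the error output later in this thread, while all other values are hypothetical:

```hcl
resource "aws_redshift_cluster" "analytics_tx_cluster" {
  cluster_identifier = "analytics-tx-cluster"
  node_type          = "dc2.large" # hypothetical node type
  number_of_nodes    = 4           # changed to 8 to trigger the resize
  database_name      = "analytics" # hypothetical
  master_username    = "admin"     # hypothetical
  master_password    = "..."       # redacted
}
```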

Debug Output

Panic Output

Expected Behavior

According to the Redshift API documentation, elastic resize is the default resize method. One would expect that a modification of `number_of_nodes` that falls within the conditions for an elastic resize would actually trigger an elastic resize.
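
For context, the Redshift API exposes two ways to change the node count, and the behavior reported here is consistent with the provider calling ModifyCluster (which performs a classic resize) rather than ResizeCluster (which defaults to elastic). A sketch with the AWS CLI, using the cluster identifier from the report — shown for illustration only, since it must run against a real cluster:

```shell
# Elastic resize: the default for the ResizeCluster API
aws redshift resize-cluster \
  --cluster-identifier analytics-tx-cluster \
  --number-of-nodes 8

# Classic resize via ModifyCluster -- what the provider appears to call
aws redshift modify-cluster \
  --cluster-identifier analytics-tx-cluster \
  --number-of-nodes 8
```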

Actual Behavior

A classic resize was performed instead of an elastic resize, taking the cluster offline. The classic resize completed successfully, but it is not great that the cluster was taken offline when only a few minutes of downtime was expected.

Steps to Reproduce

Increase the node count from 4 to 8 and run `terraform apply`.

Important Factoids

References

frosty-007 commented 4 years ago

Resize successfully completes in AWS, while Terraform reports back the following:

Error: Error Modifying Redshift Cluster (analytics-tx-cluster): timeout while waiting for state to become 'available' (last state: 'resizing', timeout: 40m0s)

  on analytics.tf line 1, in resource "aws_redshift_cluster" "analytics_tx_cluster":
   1: resource "aws_redshift_cluster" "analytics_tx_cluster" {
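
The timeout itself can be worked around independently of the resize-method question: the `aws_redshift_cluster` resource supports a configurable `timeouts` block, so the update timeout could be raised above the 40m default to outlast a classic resize. A sketch — the appropriate value depends on cluster size:

```hcl
resource "aws_redshift_cluster" "analytics_tx_cluster" {
  # ... existing arguments ...

  timeouts {
    update = "120m" # raised from the 40m default
  }
}
```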
frosty-007 commented 4 years ago

On resizing back down to 4 cluster nodes, this also ran as a classic resize and completed successfully. However, I believe this should have run as an online elastic resize.

justinretzolk commented 2 years ago

Hey @frosty-007 :wave: Thank you for taking the time to file this issue, and for the additional updates. Given that there's been a number of AWS provider releases since you initially filed it, can you confirm whether you're still experiencing this behavior?