ribbonhood opened 9 months ago
After some tinkering, it appears the issue is related to having a `logging_configuration` block with `level` not explicitly set:
```hcl
logging_configuration {
  include_execution_data = false
}
```
When no default is set for `level`, a bug causes Terraform to attempt to recreate the state machine, which in turn produces this error. Explicitly adding `level = "OFF"` does not recreate the state machine, and updates work as expected:
```hcl
logging_configuration {
  level                  = "OFF"
  include_execution_data = false
}
```
I'll leave this open as it may be an actual bug that needs to be looked into.
I've had a similar issue, which seemed to be caused by not setting `kms_data_key_reuse_period_seconds` in `encryption_configuration`.
Every apply would perform an update in place to set the value from 300 (the default) to null, and more often than not would produce the same eventual-consistency error. It was also bumping the version (with `publish = true`), which stopped happening after adding the encryption setting.
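A sketch of the encryption workaround described above, with the reuse period pinned to the default so successive plans stay clean (the KMS key reference is a hypothetical placeholder):

```hcl
# Hypothetical encryption_configuration block: setting the reuse period
# explicitly to the default (300) prevents the perpetual in-place diff
# from 300 to null on every apply.
encryption_configuration {
  type                              = "CUSTOMER_MANAGED_KMS_KEY"
  kms_key_id                        = aws_kms_key.sfn.arn
  kms_data_key_reuse_period_seconds = 300
}
```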
Terraform Core Version
1.6.5
AWS Provider Version
5.29.0
Affected Resource(s)
aws_sfn_state_machine
Expected Behavior
State machine version is updated and pointed to the new alias
Actual Behavior
State machine update times out and fails.
Relevant Error/Panic Output Snippet
Terraform Configuration Files
DEFINITION
ROLE
Steps to Reproduce
1. Run `terraform apply` to create the resources.
2. Run `terraform apply` again; even without making any changes, the update fails.
Debug Output