hashicorp / terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
https://www.terraform.io/

Error refreshing state: state data in S3 does not have the expected content. Comes up when I try to spin up a new tf plan #20708

Open 00subra8 opened 5 years ago

00subra8 commented 5 years ago

I have manually deleted my S3, .tfstate file, Lambda, API Gateway, and CloudFront entries and am trying to spin up a new instance of all of the above.

The terraform plan is fine, but once I approve:

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically use this backend unless the backend configuration changes.

Error refreshing state: state data in S3 does not have the expected content.

This may be caused by unusually long delays in S3 processing a previous state update. Please wait for a minute or two and try again. If this problem persists, and neither S3 nor DynamoDB are experiencing an outage, you may need to manually verify the remote state and update the Digest value stored in the DynamoDB table to the following value:

Note: I am not using DynamoDB. Also, it seems to work fine if the .tfstate file is not deleted.

jbardin commented 5 years ago

Hi @00subra8,

That is somewhat unexpected if you're not using DynamoDB at all. The only place the backend gets a digest for comparison is from DynamoDB, so we shouldn't see this error if it doesn't exist. Can you provide some more detail about the backend configuration? What is the .tfstate file you mention?

00subra8 commented 5 years ago

I believe the .tfstate file stores the current state of my Terraform plan. I find it in the S3 bucket under my project folder. I have deleted this and torn down the whole project - S3, API Gateway, CloudFront, Lambda - and am trying to respawn the whole project. This is when I get the above error.

00subra8 commented 5 years ago

OK, I restored my tfstate to an old version and now everything works well. The error I kept getting without the tfstate was:

to manually verify the remote state and update the Digest value stored in the DynamoDB table to the following value:
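For anyone needing to do the same, one way to restore an older state version is via the AWS CLI; a rough sketch, assuming versioning is enabled on the state bucket (the bucket and key names below are placeholders):

# list the versions of the state object (requires bucket versioning)
aws s3api list-object-versions --bucket my-bucket --prefix path/to/terraform.tfstate

# download the older version you want to roll back to
aws s3api get-object --bucket my-bucket --key path/to/terraform.tfstate \
  --version-id <VERSION_ID> terraform.tfstate.old

# upload it again so it becomes the current version
aws s3 cp terraform.tfstate.old s3://my-bucket/path/to/terraform.tfstate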

So my new question is: when you tear down, do we also delete this digest entry? It would be very helpful if there is any way to completely destroy via Terraform and not tear down in stages from the AWS UI.
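Ideally I'd want something like the following to remove everything in one go (I assume this only removes the resources tracked in the state and would not delete the digest entry or the state file itself):

# preview what would be removed
terraform plan -destroy

# destroy everything tracked in the current state (Lambda, API Gateway, CloudFront, etc.)
terraform destroy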

diogodias24 commented 4 years ago

As has already been referred to here: https://github.com/hashicorp/terraform/issues/15380#issuecomment-310800265

Removing the md5 item from the already existing DynamoDB table solved it for me.
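If you prefer the CLI over the console, a rough sketch of the same removal (the table name and the bucket/key inside the LockID are placeholders for your own values):

# the digest item uses the LockID "<bucket>/<key>-md5"
aws dynamodb delete-item \
  --table-name my-terraform-lock-table \
  --key '{"LockID": {"S": "my-bucket/path/to/terraform.tfstate-md5"}}'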

Hope that helps! ;)

anuj-modi commented 4 years ago

I believe the exact issue is at this line. The backend client simply logs a warning when the MD5 put fails rather than surfacing a failure anywhere. This should NOT fail silently. Since at this point the infrastructure changes have already succeeded, I think this should either be a time-based retry or a message to the user that the hash was not successfully pushed. Maybe both.

RahulKamboj21 commented 2 years ago

I resolved this issue by following the error message shown in the Jenkins job itself.

I was able to resolve it by updating the Digest value in DynamoDB with the one provided by the error:

Error refreshing state: state data in S3 does not have the expected content.

This may be caused by unusually long delays in S3 processing a previous state update. Please wait for a minute or two and try again. If this problem persists, and neither S3 nor DynamoDB are experiencing an outage, you may need to manually verify the remote state and update the Digest value stored in the DynamoDB table to the following value: 8dcbb62b2ddf9b5daebd612fa524a7be

I looked at the DynamoDB item that contains the terraform.tfstate-md5 LockID and replaced the value.


Steps: DynamoDB --> Tables --> click on your table --> Explore Items --> {{DynamoDB}}/{{tfstate_name}}.md, then change the value in the Digest column there and save.
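The same edit can also be made with the AWS CLI instead of the console; a rough sketch, where the table name and the bucket/key in the LockID are placeholders and the digest is the value printed by the error above:

aws dynamodb update-item \
  --table-name my-terraform-lock-table \
  --key '{"LockID": {"S": "my-bucket/path/to/terraform.tfstate-md5"}}' \
  --update-expression "SET #d = :d" \
  --expression-attribute-names '{"#d": "Digest"}' \
  --expression-attribute-values '{":d": {"S": "8dcbb62b2ddf9b5daebd612fa524a7be"}}'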

Issue resolved!

MiaKham commented 1 year ago

In our case, there is no value provided for the digest in the error message. The error is intermittent. We run the deployment in our GitHub workflow. Any workaround for this? The only thing that seems to work is waiting for some time and trying again. Sometimes it takes days.
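Would it work to compute the MD5 of the state object in S3 ourselves and write it into the Digest item with an update-item call like the one above? A rough, untested sketch, assuming the Digest value is just the hex MD5 of the current state object (bucket/key are placeholders):

# hex MD5 of the current state object in S3 (use `md5` instead of `md5sum` on macOS)
aws s3 cp s3://my-bucket/path/to/terraform.tfstate - | md5sum | awk '{print $1}'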

AndriiKhodyriev commented 1 year ago

Got the same error today.

Tried deleting all the S3 data and emptying the DynamoDB table - no change so far...

AndriiKhodyriev commented 1 year ago

In my case, I needed to quickly fix this issue for a POC. None of the solutions I found before worked. I managed to fix it by changing the endpoints for the S3 backend (via Terragrunt):

remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
  config = {
    bucket = "${basename(dirname(get_terragrunt_dir()))}-companyname-terraform-state-files" // this path 
    key                   = "${path_relative_to_include()}_tfstate/terraform.tfstate" // this path 
    region                = "eu-central-1"
    encrypt               = true
    dynamodb_table        = "companyname-terraform-state-lock-table" // this path 
    session_name          = "terraform-assume"
    disable_bucket_update = true
  }
}

sreekanth3107 commented 1 year ago

It got fixed for me by removing the entry in DynamoDB.

sathya1602 commented 1 year ago

Thank you, the method above of updating the Digest value in DynamoDB is working.