Open luciahouse33 opened 5 months ago
Hi @luciahouse33!
I tried to replicate this issue, but the result after a terraform apply was successful, with no errors.
I noticed that your environment variable "source_code_path" = "./scripts/cloud-functions/obtain_raw_data" doesn't include the extension of the file (object) you are trying to upload. Even if you are using content_type, it is necessary to include this in the source value, as you can see in the most basic example of google_storage_bucket_object in the Terraform Registry.
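A minimal sketch of what that looks like; the names and paths below are illustrative, assuming the zip lives inside the directory from your variable:

resource "google_storage_bucket_object" "example" {
  name   = "obtain_raw_data.zip"
  source = "./scripts/cloud-functions/obtain_raw_data/index.zip" # the source path includes the file extension
  bucket = "my-bucket" # placeholder bucket name
}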
Since we don't have access to your data "archive_file" "source_code" block or to the contents of obtain_raw_data, we are sharing the official links with all the information. Additionally, here is the code used to try to replicate this issue:
provider "google" {
user_project_override = true
billing_project = "my-project"
project = "my-project"
}
terraform {
required_providers {
google = {
source = "hashicorp/google-beta"
version = "5.35.0"
}
}
}
resource "google_storage_bucket" "bucket_18618" {
name = "bucket-18618"
location = "US"
}
resource "google_storage_bucket_object" "bo_18618" {
source = "./utils/bucket_objects/index.zip"
name = "bo_18618"
bucket = google_storage_bucket.bucket_18618.name
content_type = "application/zip"
}
The index.zip file contains an index.js file with the following code:
exports.helloGET = (req, res) => {
  res.status(200).send('Hello world!');
};
I suggest you check your project configuration, environment variables, and zip file contents, and run the process with a simplified example first. Once that works, add back the data blocks, modules, environment variables, and locals, and when everything is OK you can continue with your current configuration.
If you continue having problems, share the code and the missing data with us. You can replace sensitive values with examples like: project = "my-project", org_id = 1234567890, iam_user = "my-user@my-domain.com".
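For example, a simplified, sanitized starting point for testing might look like the following (all values here are placeholders):

provider "google" {
  project = "my-project" # placeholder project ID
}

resource "google_storage_bucket" "test" {
  name     = "my-project-test-bucket" # placeholder bucket name
  location = "US"
}

resource "google_storage_bucket_object" "test" {
  name   = "index.zip"
  source = "./index.zip" # a locally existing file, without data blocks or modules yet
  bucket = google_storage_bucket.test.name
}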
I have the same issue in my code when I try to update the file. The first apply works well, but when I modify the file I get the same error. I didn't have this error with the previous version of the provider. Regards
@florianmartineau please follow the shared instructions, or provide your Terraform code in this ticket or in a new one.
Facing the same issue using provider registry.terraform.io/hashicorp/google v5.38.0.
My automation is intermittently failing because of this. Re-executing resolves the issue in some cases.
Regards, Rodrigo
Apparently this issue occurs randomly; several users have reported it but have not provided code. In my case I have tried it with the shared Terraform Registry code, but no error has been generated.
I tried reproducing the issue, but it is not happening with my local setup. If someone is still facing this issue, can you please provide more logs or exact reproduction steps so we can start an investigation? Thanks!
I am facing the same issue every time with google 5.38.0; two versions back it was running perfectly fine. There are no changes to the local config file, but the plan results in a different hash due to some bug. If I delete the file from the bucket, the apply completes successfully, but a subsequent apply without any change fails, and the plan reports 1 file to be added even though it should remain unchanged.
Below is the extracted snippet:
variable "init_files" {
default = [
"resources/config-rendered.json"
]
}
resource "local_file" "rendered-config" {
depends_on = [
data.template_file.config]
content = data.template_file.config.rendered
filename = "resources/config-rendered.json"
}
resource google_storage_bucket_object artifacts {
depends_on = [
google_storage_bucket.bucket, local_file.config
]
count = length(var.init_files)
name = "${element(var.init_files, count.index)}"
source = "resources/${basename("${element(var.init_files, count.index)}")}"
bucket = lower(var.bucket)
}
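For completeness, the data.template_file.config block referenced above is not part of the snippet; a hypothetical sketch of what it might look like (the template path and vars are assumptions):

data "template_file" "config" {
  # Hypothetical: the real template path and variables were not shared in this thread.
  template = file("${path.module}/resources/config.json.tpl")

  vars = {
    environment = "dev" # assumed placeholder variable
  }
}

On current Terraform versions the built-in templatefile() function is generally preferred over the template_file data source, but the snippet above follows the shape of the original configuration.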
Hi @ggtisc, thanks for attempting to reproduce the issue.
The data block in my initial summary of the issue results in an archive file with the name of the output_path specified, which contains the .zip
file extension. The source directory being zipped in this case contains the following files:
The provider block is as follows:
terraform {
  backend "http" {} # this is intentionally blank and configured somewhere else

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "5.35.0"
    }
  }
}

provider "google" {
  project = "my-redacted-project-id"
  region  = "us"
}
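For context, the data "archive_file" block described above (zipping a source directory into an output_path ending in .zip) is roughly of this shape; the paths below are assumptions, since the original block was not shared in this thread:

data "archive_file" "source_code" {
  type        = "zip"
  source_dir  = "./scripts/cloud-functions/obtain_raw_data" # directory from the issue, assumed to be the source_dir
  output_path = "./build/obtain_raw_data.zip"               # assumed output path ending in .zip
}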
I also noted that there was an item in the release notes for version 5.35.0 which seems to directly correlate with this issue. There is also a post on that MR which refers to the same issue we are seeing here.
I would also like to emphasize that this is an intermittent issue. We saw this several times when using 5.35.0 and have moved to using 5.34.0, which seems to provide consistent success. Please see if you can run the initial deploy and a sequence of updates to see if you can reproduce the issue that way.
Hi @luciahouse33 @mohsinkhansymc, if you can reproduce the issue, can you please share some logs or working reproduction steps? I understand it is not happening every time, but I have not been able to reproduce it even once in my local setup, so it would be good to get some data to start an investigation. Thank you!
I can consistently reproduce this error with the following Terraform code:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 5.35.0, < 6.0.0"
    }
  }
}

resource "google_storage_bucket" "bucket" {
  name     = "${var.prefix}-my-bucket"
  project  = var.project_id
  location = "EU"
}

resource "local_file" "file" {
  content  = "test-content"
  filename = "test.txt"
}

data "archive_file" "file" {
  output_path = "archive.zip"
  type        = "zip"
  source_file = local_file.file.filename
}

resource "google_storage_bucket_object" "bo_18618" {
  source       = data.archive_file.file.output_path
  name         = "testfile.zip"
  bucket       = google_storage_bucket.bucket.name
  content_type = "application/zip"
}
And executing:
rm test.txt ; rm archive.zip; terraform apply -auto-approve
First apply works, then on each repeat of the above command I get:
Plan: 1 to add, 1 to change, 0 to destroy.
local_file.file: Creating...
local_file.file: Creation complete after 0s [id=60b62e43b6a5e292b8fdbd41e57de248605d2c27]
data.archive_file.file: Reading...
data.archive_file.file: Read complete after 0s [id=74486a94c043e7bac0c3924ea67768bdaaebf703]
╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for google_storage_bucket_object.bo_18618 to include new values learned so far during apply, provider "registry.terraform.io/hashicorp/google" produced an invalid new value for .detect_md5hash: was cty.StringVal("different hash"), but now cty.StringVal("ld9yrroe0lkvzZ6hbCHgUg==").
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
A slightly different test case for this bug:
resource "google_storage_bucket" "bucket" {
name = "${var.prefix}-my-bucket"
project = var.project_id
location = "EU"
}
data "http" "example" {
url = "https://httpstat.us/200"
}
resource "google_storage_bucket_object" "bo1" {
content = jsonencode(data.http.example.response_headers)
name = "file1"
bucket = google_storage_bucket.bucket.name
}
resource "google_storage_bucket_object" "bo2" {
content = google_storage_bucket_object.bo1.generation
name = "file2"
bucket = google_storage_bucket.bucket.name
}
The first two terraform apply runs succeed, but all the following ones fail.
This may also be a slightly different bug: on the second apply the contents are not updated, even though, according to the plan, the content will change. I observe the same behavior if I use md5hash too.
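For reference, the md5hash variant mentioned above only changes the content line of the second object (a sketch):

resource "google_storage_bucket_object" "bo2" {
  content = google_storage_bucket_object.bo1.md5hash # instead of .generation
  name    = "file2"
  bucket  = google_storage_bucket.bucket.name
}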
Terraform Version & Provider Version(s)
Terraform v1.5.7 on linux_amd64
Affected Resource(s)
google_storage_bucket_object
Terraform Configuration
where vars are like:
Debug Output
No response
Expected Behavior
Update can be applied
Actual Behavior
Intermittently, the following error happens
As a workaround, I have commented out the resources and re-added them after an intermediate deployment.
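Another workaround reported earlier in this thread is pinning the provider to a version before 5.35.0; a minimal sketch (5.34.0 is the version reported as consistently working, not an officially confirmed fix boundary):

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "5.34.0" # last version reported in this thread as not showing the inconsistent-plan error
    }
  }
}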
Steps to reproduce
terraform plan (runs successfully)
terraform apply
Important Factoids
In the terraform plan, for the resource with the issue, I see the following:
I have seen this twice with the terraform provider 5.35.0. When I have downgraded, and run ~6 updates, I do not see this happen. With it being intermittent, it is hard to fully test.
References
No response
b/355702274