Open pradeep-repaka-mf opened 3 years ago
Hi @pradeep-repaka-mf 👋 Thank you for raising this and sorry you ran into trouble here.
The "Provider produced inconsistent final plan" type of error seems to indicate that there is potentially a bug either in the Terraform CLI or in the aws_s3_bucket_object resource, but we will need help reproducing the issue. Ideally, this would be a self-contained configuration we can run without needing special access or inventing details, but if not, can you at least please provide either the module configuration or the outputs of that module?
If possible, you may also want to check whether upgrading to Terraform CLI version 0.14.7 (the latest as of this writing) exhibits the same issue, since there were some operation graph changes between Terraform CLI 0.13 and 0.14.
I'm experiencing this too. It looks like a regression from #14900; the issue did not show with provider version 3.16.0.
Hi @Jorge-Rodriguez can you please provide a self-contained configuration that reproduces the issue?
@bflad I'll try, but it might be hard to do. We haven't been able to reproduce the issue consistently; we've only seen it when running Terraform via GitHub Actions.
@bflad I already provided an example in the bug description. I have not given ami, vpc_id, and subnet_id values; do you want me to provide those values too?
@pradeep-repaka-mf the referenced Terraform Module appears to be private, at least to my account.
$ terraform init
Initializing modules...
Downloading git::https://github.com/gruntwork-io/terraform-aws-server.git?ref=v0.9.4 for primary_subcluster_node...
Downloading git::https://github.com/gruntwork-io/terraform-aws-server.git?ref=v0.9.4 for secondary_subcluster_node...
Error: Failed to download module
Could not download module "primary_subcluster_node" (main.tf:24) source code
from
"git::https://github.com/gruntwork-io/terraform-aws-server.git?ref=v0.9.4":
error downloading
'https://github.com/gruntwork-io/terraform-aws-server.git?ref=v0.9.4':
/usr/local/bin/git exited with 128: Cloning into
'.terraform/modules/primary_subcluster_node'...
ERROR: Repository not found.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Error: Failed to download module
Could not download module "secondary_subcluster_node" (main.tf:48) source code
from
"git::https://github.com/gruntwork-io/terraform-aws-server.git?ref=v0.9.4":
error downloading
'https://github.com/gruntwork-io/terraform-aws-server.git?ref=v0.9.4':
/usr/local/bin/git exited with 128: Cloning into
'.terraform/modules/secondary_subcluster_node'...
ERROR: Repository not found.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I'm also encountering this error, but with a different resource, so I'm not certain it's the same issue.
I can't reproduce it reliably; it's started popping up a few times over the last week in our automated deployments across a few workspaces/environments.
Context, if it helps:
Our workflow uses TeamCity to run a terraform plan, save that plan to disk, and then immediately run a terraform apply of that saved plan (same script and build process; we don't exit in between).
The Lambda function referenced has its source_code_hash set to a value read from an S3 object.
I have other aws_lambda_function resources in this configuration which are fine; only the two Lambda functions that have an associated aws_lambda_function_event_invoke_config generate these errors.
data "aws_s3_bucket_object" "xxx" {
  bucket = local.lambda_s3_bucket
  key    = "path/to/lambda/xxx.base64sha256"
}

resource "aws_lambda_function" "xxx" {
  count            = local.lambda_functions_deployed ? 1 : 0
  source_code_hash = data.aws_s3_bucket_object.xxx.body
  // rest of the properties omitted
}

resource "aws_lambda_function_event_invoke_config" "xxx" {
  count                  = local.lambda_functions_deployed ? 1 : 0
  function_name          = aws_lambda_function.xxx[0].function_name
  qualifier              = aws_lambda_function.xxx[0].version
  maximum_retry_attempts = 1
}
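A possible workaround (a sketch only, not verified against this particular bug): point qualifier at a stable aws_lambda_alias name instead of the function's version attribute, which is unknown until apply. The alias resource and its name below are assumptions, not part of the original configuration.

```hcl
# Hypothetical workaround sketch: an alias gives the event invoke config a
# stable qualifier, so the plan no longer depends on the not-yet-known version.
resource "aws_lambda_alias" "xxx" {
  count            = local.lambda_functions_deployed ? 1 : 0
  name             = "live" # assumed alias name
  function_name    = aws_lambda_function.xxx[0].function_name
  function_version = aws_lambda_function.xxx[0].version
}

resource "aws_lambda_function_event_invoke_config" "xxx_via_alias" {
  count                  = local.lambda_functions_deployed ? 1 : 0
  function_name          = aws_lambda_function.xxx[0].function_name
  qualifier              = aws_lambda_alias.xxx[0].name
  maximum_retry_attempts = 1
}
```

AWS accepts either a version number or an alias name as the qualifier here, so the invoke config follows whichever version the alias currently points at.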
The error this time was:
Error: Provider produced inconsistent final plan
When expanding the plan for
aws_lambda_function_event_invoke_config.xxx[0] to include new values
learned so far during apply, provider "registry.terraform.io/hashicorp/aws"
produced an invalid new value for .qualifier: was cty.StringVal("66"), but now
cty.StringVal("67").
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
Error: Provider produced inconsistent final plan
When expanding the plan for
aws_lambda_function_event_invoke_config.yyy[0] to include new
values learned so far during apply, provider
"registry.terraform.io/hashicorp/aws" produced an invalid new value for
.qualifier: was cty.StringVal("66"), but now cty.StringVal("67").
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
Terraform Version 0.13.5 AWS Provider Version v3.37.0
I'm not sure if it's relevant, but the qualifier mentioned has changed each time:
.qualifier: was cty.StringVal("56"), but now cty.StringVal("57").
.qualifier: was cty.StringVal("59"), but now cty.StringVal("60").
.qualifier: was cty.StringVal("62"), but now cty.StringVal("63").
.qualifier: was cty.StringVal("66"), but now cty.StringVal("67").
I'm seeing this too with v5.7.0 of the provider and Terraform v1.2.8. Going to disable the source_hash until there's a workaround.
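For reference, "disabling the source hash" could look like the hypothetical sketch below: derive the hash from a local build artifact (or omit the argument entirely) so the value is known at plan time. The local path is an assumption.

```hcl
resource "aws_lambda_function" "xxx" {
  count = local.lambda_functions_deployed ? 1 : 0

  # Instead of data.aws_s3_bucket_object.xxx.body, compute the hash from a
  # local artifact so it is known at plan time (path is hypothetical):
  source_code_hash = filebase64sha256("${path.module}/build/xxx.zip")

  // rest of the properties omitted
}
```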
Community Note
Terraform CLI and Terraform AWS Provider Version
terraform - 0.13.5, terragrunt - 0.25.5, AWS provider - latest; also tested with v3.24.0.
Affected Resource(s)
aws_s3_bucket_object
Terraform Configuration Files
Example Terraform code
Panic Output
Error: Provider produced inconsistent final plan

When expanding the plan for aws_s3_bucket_object.s3_secondary_subcluster_instance_ip_list to include new values learned so far during apply, provider "registry.terraform.io/hashicorp/aws" produced an invalid new value for .version_id: was known, but now unknown.

This is a bug in the provider, which should be reported in the provider's own issue tracker.
Expected Behavior
Ideally, when the value of 'secondary_subcluster_node_count' is changed from 0 to any non-zero value and we re-apply, the private IP addresses of the newly created nodes in the secondary subcluster should be written to the "cluster/config/secondary_subcluster_instance_ip_list" file.
Actual Behavior
Instead of writing the new node's private IP address to the "cluster/config/secondary_subcluster_instance_ip_list" file in the S3 bucket, the provider throws the error above when we re-apply after changing the 'secondary_subcluster_node_count' value.
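The description above suggests a minimal reproduction along these lines (a sketch with assumed resource, variable, and output names; the real module configuration was not shared):

```hcl
# Hypothetical sketch: write the secondary subcluster node IPs into the
# config file in S3. When the node count changes, `content` changes, so on a
# versioned bucket the object's version_id is unknown until apply, which may
# be what triggers the error above.
resource "aws_s3_bucket_object" "s3_secondary_subcluster_instance_ip_list" {
  bucket  = var.cluster_config_bucket # assumed variable name
  key     = "cluster/config/secondary_subcluster_instance_ip_list"
  content = join("\n", module.secondary_subcluster_node[*].private_ip)
}
```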
Steps to Reproduce
1. terragrunt apply
2. Change 'secondary_subcluster_node_count' from 0 to a non-zero value, then run terragrunt apply again; the error above is thrown.