bpastiu closed this issue 4 years ago
Hi @bpastiu, if you could provide any details on the panic, perhaps encrypting your log (removing anything you shouldn't share with us beforehand, per https://www.hashicorp.com/security.html) and sharing it here, that would help.
Without more data, it's hard to tell where we (Terraform) should be capturing the panic. Limitations related to resources are most often inherent to the resource itself; that is, the limitation may lie within AWS. More information should be available in the panic output, or if you set TF_LOG=trace and read the AWS responses.
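For reference, one way to enable trace logging so the output survives the crash is to set the two standard Terraform logging variables before re-running the failing step (the terragrunt command below is a placeholder for whatever command fails):

```shell
# Enable Terraform's most verbose logging and write it to a file so the
# trace is preserved even if the process panics mid-run.
export TF_LOG=trace
export TF_LOG_PATH=./terraform-trace.log

# Re-run the failing step here, e.g.:
#   terragrunt apply

echo "TF_LOG is set to: $TF_LOG"
```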
We had a TF_LOG=trace and we managed to extract this before the panic:
[terragrunt] [/config/account/region/environment/api-gateway] 2020/03/04 16:30:12 Running command: terraform init -backend-config=bucket=bucket-name -backend-config=dynamodb_table=dynamo-db-table -backend-config=encrypt=true -backend-config=key=12345/environment/api-gateway/terraform.tfstate -backend-config=region=region
There is no other error in our logs.
Thanks, Bogdan Pastiu
Hi @bpastiu,
I assumed this was a panic due to the grpc payload size limit, but the lack of logs is suspicious. You must be getting the panic output in order to know it's a panic. Can you provide the traceback so we know where the error is occurring?
Hi,
I can only do that tomorrow once I get back to the office. Can you tell me exactly what information you need so it won't take too much time tomorrow?
What I can tell you off the top of my head is that each module we deploy creates a folder in S3 with its own terraform.tfstate file. For the API gateway it tries to do that but never manages to. In the respective bucket, the api-gateway key is missing entirely.
Thanks for the responses.
Thanks @bpastiu,
If you can trigger the panic, the stack trace will be included in the output.
Hi. Apparently I had it saved. Here it is:
panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0xd37d71]
goroutine 53 [running]:
github.com/gruntwork-io/terragrunt/cli.runTerraformWithRetry(0xc0000db600, 0xc0000db600, 0xc0002f8900)
	/go/src/github.com/gruntwork-io/terragrunt/cli/cli_app.go:531 +0x121
github.com/gruntwork-io/terragrunt/cli.runTerragruntWithConfig.func1(0x0, 0x0)
	/go/src/github.com/gruntwork-io/terragrunt/cli/cli_app.go:484 +0x2a
github.com/gruntwork-io/terragrunt/cli.runActionWithHooks(0xf9a250, 0x9, 0xc0000db600, 0xc0002a8140, 0xc00043bc20, 0xc0004ee500, 0xc00060a140)
	/go/src/github.com/gruntwork-io/terragrunt/cli/cli_app.go:495 +0x2ae
github.com/gruntwork-io/terragrunt/cli.runTerragruntWithConfig(0xc0000db600, 0xc0002a8140, 0x0, 0x0, 0x0)
	/go/src/github.com/gruntwork-io/terragrunt/cli/cli_app.go:483 +0x2c7
github.com/gruntwork-io/terragrunt/cli.RunTerragrunt(0xc0000db600, 0xfa49a3, 0x15)
	/go/src/github.com/gruntwork-io/terragrunt/cli/cli_app.go:370 +0x79c
github.com/gruntwork-io/terragrunt/configstack.(*runningModule).runNow(0xc0005c7c20, 0x0, 0x0)
	/go/src/github.com/gruntwork-io/terragrunt/configstack/running_module.go:238 +0x17a
github.com/gruntwork-io/terragrunt/configstack.(*runningModule).runModuleWhenReady(0xc0005c7c20)
	/go/src/github.com/gruntwork-io/terragrunt/configstack/running_module.go:201 +0x6a
github.com/gruntwork-io/terragrunt/configstack.runModules.func1(0xc00009f150, 0xc0005c7c20)
	/go/src/github.com/gruntwork-io/terragrunt/configstack/running_module.go:171 +0x51
created by github.com/gruntwork-io/terragrunt/configstack.runModules
	/go/src/github.com/gruntwork-io/terragrunt/configstack/running_module.go:169 +0xe1
Makefile:68: recipe for target 'apply' failed
make: *** [apply] Error 2
What happens before the panic is in my first comment.
Thanks for the info @bpastiu,
This is a panic in terragrunt, not in terraform itself. You will have to file an issue with them to determine what the actual problem is, as without any output from terraform we can only guess where the problem lies.
I double checked, and even if the request body were larger than 4MB, it should result in an error in terraform, which would show normal output and logs.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Hello,
We've encountered an interesting issue while deploying one of our stacks. I can't go into specifics of the configuration for obvious reasons, but I'll try to explain as best I can.
We are trying to deploy an aws_api_gateway_rest_api resource with the body being a rendered OpenAPI template file. We didn't experience any issues until some recent changes were made to the OpenAPI file. The file is now about 3,500 lines long and is causing the panic. We confirmed this is the cause by testing with a smaller OpenAPI file, which worked fine.
I would like to ask whether there is any limitation here we should take into account going forward, and how we could get this resolved.
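The relevant part of our configuration looks roughly like this (resource name, file name, and variables are placeholders, not our real values):

```hcl
# Sketch of the failing resource: the rendered OpenAPI document is passed
# inline as the REST API body. Names here are illustrative only.
resource "aws_api_gateway_rest_api" "api" {
  name = "example-api"

  # templatefile() renders the ~3500-line OpenAPI spec into a single string,
  # which is where the size of the document comes into play.
  body = templatefile("${path.module}/openapi.yaml.tpl", {
    environment = var.environment
  })
}
```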