stackitcloud / terraform-provider-stackit

The official Terraform provider for STACKIT
https://registry.terraform.io/providers/stackitcloud/stackit
Apache License 2.0

segfault in v0.16.1 #358

Closed · malt3 closed this issue 3 months ago

malt3 commented 3 months ago

I got a segfault when trying to use the stackit_loadbalancer resource on v0.16.1. I can't really tell what is causing it; it seems to happen every time for me.

╷
│ Error: Plugin did not respond
│
│ The plugin encountered an error, and failed to respond to the plugin6.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.
╵

Stack trace from the terraform-provider-stackit_v0.16.1 plugin:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0xb42957]

goroutine 25 [running]:
github.com/stackitcloud/terraform-provider-stackit/stackit/internal/services/loadbalancer/loadbalancer.(*loadBalancerResource).Create(0xc00009a000, {0x1052b88, 0xc00060a720}, {{{{0x1057ef0, 0xc0005c3e90}, {0xdae640, 0xc00045d8c0}}, {0x105b278, 0xc0000c2320}}, {{{0x1057ef0, ...}, ...}, ...}, ...}, ...)
    github.com/stackitcloud/terraform-provider-stackit/stackit/internal/services/loadbalancer/loadbalancer/resource.go:577 +0x277
github.com/hashicorp/terraform-plugin-framework/internal/fwserver.(*Server).CreateResource(0xc00016b860, {0x1052b88, 0xc00060a720}, 0xc0004bf5e0, 0xc0004bf5b8)
    github.com/hashicorp/terraform-plugin-framework@v1.8.0/internal/fwserver/server_createresource.go:101 +0x578
github.com/hashicorp/terraform-plugin-framework/internal/fwserver.(*Server).ApplyResourceChange(0xc00016b860, {0x1052b88, 0xc00060a720}, 0xc0006a0000, 0xc0004bf6d0)
    github.com/hashicorp/terraform-plugin-framework@v1.8.0/internal/fwserver/server_applyresourcechange.go:57 +0x4aa
github.com/hashicorp/terraform-plugin-framework/internal/proto6server.(*Server).ApplyResourceChange(0xc00016b860, {0x1052b88?, 0xc00060a630?}, 0xc000614050)
    github.com/hashicorp/terraform-plugin-framework@v1.8.0/internal/proto6server/server_applyresourcechange.go:55 +0x38e
github.com/hashicorp/terraform-plugin-go/tfprotov6/tf6server.(*server).ApplyResourceChange(0xc00035a000, {0x1052b88?, 0xc000504240?}, 0xc0003c6000)
    github.com/hashicorp/terraform-plugin-go@v0.22.2/tfprotov6/tf6server/server.go:846 +0x3d0
github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tfplugin6._Provider_ApplyResourceChange_Handler({0xeb8360, 0xc00035a000}, {0x1052b88, 0xc000504240}, 0xc000386700, 0x0)
    github.com/hashicorp/terraform-plugin-go@v0.22.2/tfprotov6/internal/tfplugin6/tfplugin6_grpc.pb.go:518 +0x1a6
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0001af000, {0x1052b88, 0xc0005041b0}, {0x1058ea0, 0xc0003f8000}, 0xc000530000, 0xc0003471a0, 0x1659858, 0x0)
    google.golang.org/grpc@v1.63.2/server.go:1369 +0xdf8
google.golang.org/grpc.(*Server).handleStream(0xc0001af000, {0x1058ea0, 0xc0003f8000}, 0xc000530000)
    google.golang.org/grpc@v1.63.2/server.go:1780 +0xe8b
google.golang.org/grpc.(*Server).serveStreams.func2.1()
    google.golang.org/grpc@v1.63.2/server.go:1019 +0x8b
created by google.golang.org/grpc.(*Server).serveStreams.func2 in goroutine 36
    google.golang.org/grpc@v1.63.2/server.go:1030 +0x125

Error: The terraform-provider-stackit_v0.16.1 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
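
For context on what the panic above usually means: a SIGSEGV with "invalid memory address or nil pointer dereference" inside Create is the classic result of dereferencing an optional, pointer-typed value without a nil check. The following Go snippet is a minimal, hypothetical sketch of that pattern; the type and field names are illustrative and are not taken from the provider's code.

package main

import "fmt"

// createResponse mimics an SDK response in which optional fields are
// pointers and may be nil when the API omits them. (Illustrative only.)
type createResponse struct {
    privateAddress *string
}

func main() {
    resp := &createResponse{} // privateAddress is left nil

    // Dereferencing the nil pointer directly would panic with exactly
    // "invalid memory address or nil pointer dereference":
    //   fmt.Println(*resp.privateAddress)

    // Guarding every optional pointer avoids the crash:
    if resp.privateAddress != nil {
        fmt.Println("private address:", *resp.privateAddress)
    } else {
        fmt.Println("private address not set")
    }
}
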
DiogoFerrao commented 3 months ago

Hey @malt3, thank you for reporting this issue, and we're sorry for any disturbance it may have caused.

Can you provide more details on the configuration you are using when this error occurs? (Which attributes you are providing and their values, if possible)

I will investigate this issue and provide you with an update soon.

malt3 commented 3 months ago

This is the resource definition:

terraform {
  required_providers {
    stackit = {
      source  = "stackitcloud/stackit"
      version = "0.16.1"
    }
  }
}

resource "stackit_loadbalancer" "loadbalancer" {
  project_id = var.stackit_project_id
  name       = "${var.name}-lb"
  target_pools = [
    for portName, port in var.ports : {
      name        = "target-pool-${portName}"
      target_port = port
      targets = [
        for ip in var.member_ips : {
          display_name = "target-${portName}"
          ip           = ip
        }
      ]
      active_health_check = {
        healthy_threshold   = 10
        interval            = "3s"
        interval_jitter     = "3s"
        timeout             = "3s"
        unhealthy_threshold = 10
      }
    }
  ]
  listeners = [
    for portName, port in var.ports : {
      name        = "listener-${portName}"
      port        = port
      protocol    = "PROTOCOL_TCP"
      target_pool = "target-pool-${portName}"
    }
  ]
  networks = [
    {
      network_id = var.network_id
      role       = "ROLE_LISTENERS_AND_TARGETS"
    }
  ]
  external_address = var.external_address
}
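
For completeness, the resource above references several module inputs. A minimal sketch of the variable declarations it assumes could look like the following; the types are inferred from how the values are used and are not taken from the actual module.

variable "stackit_project_id" {
  type = string
}

variable "name" {
  type = string
}

variable "ports" {
  type = map(number) # e.g. { bootstrapper = 9000, kubernetes = 6443 }
}

variable "member_ips" {
  type = list(string)
}

variable "network_id" {
  type = string
}

variable "external_address" {
  type = string
}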

And here is the terraform plan output showing all the fields:

  # module.stackit_loadbalancer[0].stackit_loadbalancer.loadbalancer will be created
  + resource "stackit_loadbalancer" "loadbalancer" {
      + external_address = "193.148.172.89"
      + id               = (known after apply)
      + listeners        = [
          + {
              + display_name = (known after apply)
              + port         = 9000
              + protocol     = "PROTOCOL_TCP"
              + target_pool  = "target-pool-bootstrapper"
            },
          + {
              + display_name = (known after apply)
              + port         = 30090
              + protocol     = "PROTOCOL_TCP"
              + target_pool  = "target-pool-join"
            },
          + {
              + display_name = (known after apply)
              + port         = 6443
              + protocol     = "PROTOCOL_TCP"
              + target_pool  = "target-pool-kubernetes"
            },
          + {
              + display_name = (known after apply)
              + port         = 9999
              + protocol     = "PROTOCOL_TCP"
              + target_pool  = "target-pool-recovery"
            },
          + {
              + display_name = (known after apply)
              + port         = 30081
              + protocol     = "PROTOCOL_TCP"
              + target_pool  = "target-pool-verify"
            },
        ]
      + name             = "demo-dbbe43ad-lb"
      + networks         = [
          + {
              + network_id = "a48c2283-1585-4201-ac1d-a34c9e3141e9"
              + role       = "ROLE_LISTENERS_AND_TARGETS"
            },
        ]
      + options          = (known after apply)
      + private_address  = (known after apply)
      + project_id       = "8a694a67-be5a-4d2f-b109-b2128a7c991c"
      + target_pools     = [
          + {
              + active_health_check = {
                  + healthy_threshold   = 10
                  + interval            = "3s"
                  + interval_jitter     = "3s"
                  + timeout             = "3s"
                  + unhealthy_threshold = 10
                }
              + name                = "target-pool-bootstrapper"
              + target_port         = 9000
              + targets             = [
                  + {
                      + display_name = "target-bootstrapper"
                      + ip           = "192.168.178.100"
                    },
                  + {
                      + display_name = "target-bootstrapper"
                      + ip           = "192.168.178.147"
                    },
                  + {
                      + display_name = "target-bootstrapper"
                      + ip           = "192.168.178.173"
                    },
                ]
            },
          + {
              + active_health_check = {
                  + healthy_threshold   = 10
                  + interval            = "3s"
                  + interval_jitter     = "3s"
                  + timeout             = "3s"
                  + unhealthy_threshold = 10
                }
              + name                = "target-pool-join"
              + target_port         = 30090
              + targets             = [
                  + {
                      + display_name = "target-join"
                      + ip           = "192.168.178.100"
                    },
                  + {
                      + display_name = "target-join"
                      + ip           = "192.168.178.147"
                    },
                  + {
                      + display_name = "target-join"
                      + ip           = "192.168.178.173"
                    },
                ]
            },
          + {
              + active_health_check = {
                  + healthy_threshold   = 10
                  + interval            = "3s"
                  + interval_jitter     = "3s"
                  + timeout             = "3s"
                  + unhealthy_threshold = 10
                }
              + name                = "target-pool-kubernetes"
              + target_port         = 6443
              + targets             = [
                  + {
                      + display_name = "target-kubernetes"
                      + ip           = "192.168.178.100"
                    },
                  + {
                      + display_name = "target-kubernetes"
                      + ip           = "192.168.178.147"
                    },
                  + {
                      + display_name = "target-kubernetes"
                      + ip           = "192.168.178.173"
                    },
                ]
            },
          + {
              + active_health_check = {
                  + healthy_threshold   = 10
                  + interval            = "3s"
                  + interval_jitter     = "3s"
                  + timeout             = "3s"
                  + unhealthy_threshold = 10
                }
              + name                = "target-pool-recovery"
              + target_port         = 9999
              + targets             = [
                  + {
                      + display_name = "target-recovery"
                      + ip           = "192.168.178.100"
                    },
                  + {
                      + display_name = "target-recovery"
                      + ip           = "192.168.178.147"
                    },
                  + {
                      + display_name = "target-recovery"
                      + ip           = "192.168.178.173"
                    },
                ]
            },
          + {
              + active_health_check = {
                  + healthy_threshold   = 10
                  + interval            = "3s"
                  + interval_jitter     = "3s"
                  + timeout             = "3s"
                  + unhealthy_threshold = 10
                }
              + name                = "target-pool-verify"
              + target_port         = 30081
              + targets             = [
                  + {
                      + display_name = "target-verify"
                      + ip           = "192.168.178.100"
                    },
                  + {
                      + display_name = "target-verify"
                      + ip           = "192.168.178.147"
                    },
                  + {
                      + display_name = "target-verify"
                      + ip           = "192.168.178.173"
                    },
                ]
            },
        ]
    }

DiogoFerrao commented 3 months ago

Hey @malt3, thank you for all the information and again for reporting the issue.

We have addressed the problem in the latest release, v0.17.0.

If the problem persists, feel free to reach out and reopen this issue.
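
For anyone else hitting this, upgrading the provider constraint to the fixed release should resolve it, for example:

terraform {
  required_providers {
    stackit = {
      source  = "stackitcloud/stackit"
      version = ">= 0.17.0"
    }
  }
}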