SepehrImanian / terraform-provider-haproxy

Terraform HAProxy Provider
https://registry.terraform.io/providers/SepehrImanian/haproxy
Apache License 2.0

Plugin Crashes #3

Open KiyoIchikawa opened 8 months ago

KiyoIchikawa commented 8 months ago

Description

We are trying to use the provider plugin from within a module and are getting a hard crash. Our Terraform code layout looks like this:

.
|__ README.md
|__ modules/
|   |__ module1/
|   |   |__ README.md
|   |   |__ versions.tf
|   |   |__ providers.tf
|   |   |__ variables.tf
|   |   |__ main.tf
|   |   |__ outputs.tf
|   |__ module2/
|   |   |__ README.md
|   |   |__ versions.tf
|   |   |__ providers.tf
|   |   |__ variables.tf
|   |   |__ main.tf
|   |   |__ outputs.tf
|__ development/
|   |__ appA/
|   |   |__ README.md
|   |   |__ main.tf
|   |   |__ versions.tf
|   |   |__ variables.tf
|__ stage/
|   |__ appA/
|   |   |__ README.md
|   |   |__ main.tf
|   |   |__ versions.tf
|   |   |__ variables.tf
|__ production/
|   |__ appA/
|   |   |__ README.md
|   |   |__ main.tf
|   |   |__ versions.tf
|   |   |__ variables.tf

The call chain goes like this: development/appA --calls--> modules/module2 --calls--> modules/module1

module2 uses the haproxy provider plugin alongside another provider plugin (octopusdeploy). The module code looks like this:

## Virtual Machine ##
module "module1" {
  source = "../module1"
  vsphere = {
    datacenter_cluster        = var.vsphere.datacenter_cluster
    compute_cluster_name      = var.vsphere.compute_cluster_name
    datastore                 = var.vsphere.datastore
    content_library           = var.vsphere.content_library
    content_library_item_name = var.vsphere.content_library_item_name
    network_name              = var.vsphere.network_name
  }
  virtual_machine = {
    host_name             = var.virtual_machine.host_name
    ipv4_address          = var.virtual_machine.ipv4_address
    ipv4_gateway          = var.virtual_machine.ipv4_gateway
    ipv4_netmask          = var.virtual_machine.ipv4_netmask
    dns_server_list       = var.virtual_machine.dns_server_list
    dns_suffix_list       = var.virtual_machine.dns_suffix_list
    is_windows_image      = true
    domain                = var.virtual_machine.domain
    domain_admin_user     = var.domain_join_username
    domain_admin_password = var.dmz_domain_join_password
  }
}

## Octopus Deploy ##
data "octopusdeploy_environments" "environments" {
  ids = var.octopusdeploy_environments
}
data "octopusdeploy_space" "default" {
  name = var.octopusdeploy_space_name
}
data "octopusdeploy_machine_policies" "default" {
  ids      = var.octopusdeploy_machine_policies
  space_id = data.octopusdeploy_space.default.id
}
resource "octopusdeploy_listening_tentacle_deployment_target" "example" {
  environments = [for e in data.octopusdeploy_environments.environments.environments : e.name]
  is_disabled  = false
  name         = var.name
  roles        = var.octopusdeploy_target_roles
  tentacle_url = "https://${var.name}:10933/"
  thumbprint   = var.octopusdeploy_server_thumbprint
}

## HAProxy ##
data "haproxy_backend" "backend" {
  name = var.haproxy_backend_name
}
resource "haproxy_server" "server" {
  name        = "${var.virtual_machine.host_name}.${var.virtual_machine.domain}"
  port        = 443
  address     = var.virtual_machine.ipv4_address
  parent_name = data.haproxy_backend.backend.id
  parent_type = "backend"
  check       = true
  inter       = 2000
  rise        = 5
  fall        = 2
  depends_on  = [data.haproxy_backend.backend]
}
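
The report does not show module2's versions.tf. Since module2 is a child module that uses non-HashiCorp providers, it would normally have to declare them in a required_providers block so Terraform can map the provider configurations passed in by the caller. A minimal sketch of what that file might contain (provider sources assumed, not taken from the report):

terraform {
  required_providers {
    # Source assumed from the registry URL at the top of this page.
    haproxy = {
      source  = "SepehrImanian/haproxy"
      version = "0.0.7"
    }
    # Assumed source for the Octopus Deploy provider.
    octopusdeploy = {
      source = "OctopusDeployLabs/octopusdeploy"
    }
  }
}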

The calling module looks like this:

provider "vsphere" {
  user           = var.vsphere_username
  password       = sensitive(var.vsphere_password)
  vsphere_server = var.vsphere_server
}
provider "octopusdeploy" {
  address = var.octopusdeploy_address
  api_key = var.octopusdeploy_api_key
}
provider "haproxy" {
  url      = "https://example.internal"
  username = var.haproxy_dataplane_username
  password = var.haproxy_dataplane_password
}

locals {
  instances = tomap({
    server01 : "xxx.xxx.xxx.xxx",
    server02 : "xxx.xxx.xxx.xxx"
  })
  domain                            = "domain"
  ipv4_gateway                      = "xxx.xxx.xxx.xxx"
  ipv4_netmask                      = 24
  dns_server_list                   = ["xxx.xxx.xxx.xxx"]
  dns_suffix_list                   = ["suffix1"]
  vsphere_datacenter_cluster        = "datacenter cluster"
  vsphere_compute_cluster_name      = "datacenter cluster"
  vsphere_resource_pool_name        = "datacenter cluster/resources"
  vsphere_datastore                 = "datastore"
  vsphere_content_library_item_name = "win-2022-std-core"
  vsphere_network_name              = "Network-Dev"
  octopusdeploy_target_roles = [
    "role1"
  ]
}

module "web_servers" {
  source = "../../modules/web_servers"
  providers = {
    octopusdeploy = octopusdeploy
    haproxy       = haproxy
  }
  for_each = local.instances
  vsphere = {
    datacenter_cluster        = var.vsphere_datacenter_cluster
    compute_cluster_name      = local.vsphere_compute_cluster_name
    datastore                 = local.vsphere_datastore
    content_library           = var.vsphere_content_library
    content_library_item_name = local.vsphere_content_library_item_name
    network_name              = local.vsphere_network_name
  }
  virtual_machine = {
    host_name             = each.key
    ipv4_address          = each.value
    ipv4_gateway          = local.ipv4_gateway
    ipv4_netmask          = local.ipv4_netmask
    dns_server_list       = local.dns_server_list
    dns_suffix_list       = local.dns_suffix_list
    is_windows_image      = true
    domain                = local.domain
    domain_admin_user     = var.domain_join_username
    domain_admin_password = var.sredmz_domain_join_password
  }
  octopusdeploy_server_thumbprint = var.octopusdeploy_server_thumbprint
  octopusdeploy_environments      = var.octopusdeploy_environments
  octopusdeploy_machine_policies  = var.octopusdeploy_machine_policy_ids
  octopusdeploy_space_name        = var.octopusdeploy_space_name
  name                            = "${each.key}.${local.domain}"
  octopusdeploy_target_roles      = local.octopusdeploy_target_roles
  haproxy_backend_name            = "some_443-backend"
}

Steps to Reproduce

Create a setup similar to the one above that uses the haproxy provider plugin.

Expected behavior: the server "server01.domain" is added to the "some_443-backend" backend in HAProxy.

Actual behavior: the plugin crashes with the following error:

│ Error: Plugin did not respond
│ 
│   with module.module2["server01"].data.haproxy_backend.backend,
│   on ../../modules/module2/providers.tf line 47, in data "haproxy_backend" "backend":
│   47: data "haproxy_backend" "backend" {
│ 
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadDataSource call. The plugin logs
│ may contain more details.
╵
╷
│ Error: Request cancelled
│ 
│   with module.module2["server02"].data.haproxy_backend.backend,
│   on ../../modules/module2/providers.tf line 47, in data "haproxy_backend" "backend":
│   47: data "haproxy_backend" "backend" {
│ 
│ The plugin.(*GRPCProvider).ReadDataSource request was cancelled.
╵

Stack trace from the terraform-provider-haproxy_v0.0.7 plugin:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x10 pc=0x104ce49dc]

goroutine 56 [running]:
terraform-provider-haproxy/internal/backend.dataSourceHaproxyABackendRead(0x1400010f5f0?, {0x1050a1780?, 0x14000138b70})
        terraform-provider-haproxy/internal/backend/data_source.go:37 +0x10c
github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Resource).ReadDataApply(0x1400048be00, 0x1400010f530?, {0x1050a1780, 0x14000138b70})
        github.com/hashicorp/terraform-plugin-sdk@v1.17.2/helper/schema/resource.go:413 +0x5c
github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Provider).ReadDataApply(0x14000518680, 0x1400053b9a0, 0x1050a1780?)
        github.com/hashicorp/terraform-plugin-sdk@v1.17.2/helper/schema/provider.go:451 +0x60
github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin.(*GRPCProviderServer).ReadDataSource(0x1400000ef80, {0x140002c7c40?, 0x1400049cc40?}, 0x140002c7c40)
        github.com/hashicorp/terraform-plugin-sdk@v1.17.2/internal/helper/plugin/grpc_provider.go:1046 +0x29c
github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5._Provider_ReadDataSource_Handler({0x1051c1140?, 0x1400000ef80}, {0x10520bbb0, 0x14000139470}, 0x1400049cc40, 0x0)
        github.com/hashicorp/terraform-plugin-sdk@v1.17.2/internal/tfplugin5/tfplugin5.pb.go:3341 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0x14000338000, {0x105210920, 0x14000103380}, 0x1400021a000, 0x140004a3e00, 0x1058ba3d0, 0x0)
        google.golang.org/grpc@v1.56.0/server.go:1337 +0xc90
google.golang.org/grpc.(*Server).handleStream(0x14000338000, {0x105210920, 0x14000103380}, 0x1400021a000, 0x0)
        google.golang.org/grpc@v1.56.0/server.go:1714 +0x82c
google.golang.org/grpc.(*Server).serveStreams.func1.1()
        google.golang.org/grpc@v1.56.0/server.go:959 +0x84
created by google.golang.org/grpc.(*Server).serveStreams.func1
        google.golang.org/grpc@v1.56.0/server.go:957 +0x16c

Error: The terraform-provider-haproxy_v0.0.7 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

Versions

Terraform v1.7.0 on darwin_arm64

SepehrImanian commented 7 months ago

Hi, thank you for taking the time to report this issue. Did you create the "haproxy_backend" Terraform resource first?

For example, like this:

resource "haproxy_backend" "backend" {
  name  = ...
  mode  = "tcp"
  balance {
    algorithm = "..."
  }
}
data "haproxy_backend" "backend" {
  name = var.haproxy_backend_name
}

The error is:

│   with module.module2["server02"].data.haproxy_backend.backend,
│   on ../../modules/module2/providers.tf line 47, in data "haproxy_backend" "backend":
│   47: data "haproxy_backend" "backend" {

It's possible that the 'haproxy_backend' doesn't exist.
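
One way to narrow this down is to run the data source on its own, outside the module chain, and see whether the lookup alone triggers the crash when the backend name does or does not exist. A minimal sketch, reusing only names already shown in the report (required_providers and variable declarations omitted, as in the report):

provider "haproxy" {
  url      = "https://example.internal"
  username = var.haproxy_dataplane_username
  password = var.haproxy_dataplane_password
}

# Look up the pre-existing backend by name only.
data "haproxy_backend" "check" {
  name = "some_443-backend"
}

output "backend_id" {
  value = data.haproxy_backend.check.id
}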

@KiyoIchikawa

KiyoIchikawa commented 7 months ago

I'm glad I can help out. Thank you for creating this provider! We appreciate your time on this.

The "haproxy_backend" is not managed by Terraform and does exist in the HAProxy instance given to the provider. We are hoping to manage the base HAProxy configuration via a different tool and insert servers into pre-existing backends (not managed by Terraform) as the servers are created by Terraform.