ovh / terraform-provider-ovh

Terraform OVH provider
https://registry.terraform.io/providers/ovh/ovh/latest/docs
Mozilla Public License 2.0

[BUG] Error: Provider produced inconsistent final plan (resource ovh_cloud_project_database with changed node subnet) #688

Open mwacker-sms opened 1 month ago

mwacker-sms commented 1 month ago

Describe the bug

I had connection issues with a private network (the database was suddenly unreachable) and reconfigured the subnet of the private network.

But then I ran into this bug and got stuck.

It might also, or additionally, be an inconsistency in the OVH API, since there seem to be some issues with the managed database resources.
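For context, the reconfiguration boiled down to bumping the VLAN id in the locals block shown further down; the subnet CIDR and the database's ip_restrictions are both derived from it, so the change cascades through all three resources. A minimal sketch (the old value 28 is inferred from the "test-28" -> "test-101" rename visible in the plan output below):

locals {
  # before the change: net101_vlan = 28  -> subnet 10.28.10.0/24
  # after the change:  net101_vlan = 101 -> subnet 10.101.10.0/24
  net101_vlan = 101
}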

Terraform Version

Terraform v1.9.2

OVH Terraform Provider Version

ovh/ovh v0.46.1

Affected Resource(s)

ovh_cloud_project_database

This is not an issue with Terraform core, as other providers seem to work just fine; restarting the plan/apply does not help either.

Terraform Configuration Files

(shortened)

terraform {
  required_providers {
    ovh = {
      source = "ovh/ovh"
      # using env variables for client-id and client-secret
    }
  }
}

# Configure the OVHcloud Provider
provider "ovh" {
  endpoint           = "ovh-eu"
}

locals {
  net101_network_id = tolist(ovh_cloud_project_network_private.net101.regions_attributes[*].openstackid)[0]
  net101_vlan = 101
}

resource "ovh_cloud_project_network_private" "net101" {
  service_name = var.project
  name         = "test-${local.net101_vlan}"
  regions      = [var.region]
  vlan_id      = local.net101_vlan
}

resource "ovh_cloud_project_network_private_subnet" "subnet_10" {
  service_name = ovh_cloud_project_network_private.net101.service_name
  network_id   = ovh_cloud_project_network_private.net101.id
  region       = var.region
  start        = "10.${ovh_cloud_project_network_private.net101.vlan_id}.10.1"
  end          = "10.${ovh_cloud_project_network_private.net101.vlan_id}.10.254"
  network      = "10.${ovh_cloud_project_network_private.net101.vlan_id}.10.0/24"
  dhcp         = true
  no_gateway   = true
  depends_on   = [ovh_cloud_project_network_private.net101]
}

resource "ovh_cloud_project_database" "mongodb" {
  service_name = var.project
  description  = "mongodb-${var.stage}"
  engine       = "mongodb"
  version      = var.mongodb_version
  plan         = var.mongodb_plan
  flavor       = var.mongodb_flavor
  disk_size    = var.mongodb_size
  nodes {
    region     = var.region_area
    network_id = local.net101_network_id
    subnet_id  = ovh_cloud_project_network_private_subnet.subnet_10.id
  }
  nodes {
    region     = var.region_area
    network_id = local.net101_network_id
    subnet_id  = ovh_cloud_project_network_private_subnet.subnet_10.id
  }
  nodes {
    region     = var.region_area
    network_id = local.net101_network_id
    subnet_id  = ovh_cloud_project_network_private_subnet.subnet_10.id
  }
  depends_on = [ovh_cloud_project_network_private_subnet.subnet_10]
  ip_restrictions {
    description = "internal-${ovh_cloud_project_network_private.net101.name}-10"
    ip          = ovh_cloud_project_network_private_subnet.subnet_10.network
  }
  backup_regions = [var.backup_region_area]
  backup_time    = "01:00:00" # UTC time
  timeouts {
    create = "1h"
    update = "1h"
    delete = "1h"
  }
}
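A possible workaround sketch (untested, not a confirmed fix): compute the CIDR once in a local and feed it to both the subnet and the ip_restrictions block, instead of reading back the subnet's network attribute, so the value is fully known at plan time and cannot be re-resolved to something else during apply:

locals {
  # same expression as the subnet's "network" argument above
  subnet_10_cidr = "10.${local.net101_vlan}.10.0/24"
}

resource "ovh_cloud_project_network_private_subnet" "subnet_10" {
  # ... as above, but:
  network = local.subnet_10_cidr
}

resource "ovh_cloud_project_database" "mongodb" {
  # ... as above, but:
  ip_restrictions {
    description = "internal-${ovh_cloud_project_network_private.net101.name}-10"
    ip          = local.subnet_10_cidr # known at plan time, no read-back from subnet state
  }
}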

Debug Output

(Since the Terraform scripts are run in a CI pipeline, I do not have debug output right now. I will try to re-run these scripts locally and will provide debug output then.)
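For reference, debug output can be captured on a local re-run with Terraform's standard log environment variables (stock Terraform behaviour, nothing provider-specific):

TF_LOG=DEBUG TF_LOG_PATH=terraform-debug.log terraform plan -out terraform-infra.plan
TF_LOG=DEBUG TF_LOG_PATH=terraform-debug.log terraform apply terraform-infra.plan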

Panic Output

No panic output given.

Expected Behavior

The change to the ip_restrictions of the ovh_cloud_project_database.mongodb resource would be applied successfully.

Actual Behavior

The apply produces the error below and stops the Terraform deployment.

Steps to Reproduce


  1. terraform plan -out terraform-infra.plan

    + terraform plan -out terraform-infra.plan
    data.ovh_cloud_project.xxx: Reading...
    data.ovh_cloud_project.xxx: Read complete after 0s
    ovh_cloud_project_network_private.net101: Refreshing state... [id=pn-XXXX_101]
    ovh_cloud_project_network_private_subnet.subnet_10: Refreshing state... [id=8bd8f077-b60d-4ae2-80d2-xxxxx]
    ovh_cloud_project_kube.xxx: Refreshing state... [id=ae9b4359-96c9-473b-8d5c-xxxx]
    ovh_cloud_project_kube_iprestrictions.bitbucket_and_dsvnet_only: Refreshing state... [id=ae9b4359-96c9-473b-8d5c-xxxx]
    ovh_cloud_project_kube_nodepool.node_pool: Refreshing state... [id=a4a97be8-79c2-4db8-88e1-xxxx]
    Terraform used the selected providers to generate the following execution
    plan. Resource actions are indicated with the following symbols:
    + create
    ~ update in-place
    Terraform will perform the following actions:
    # ovh_cloud_project_database.mongodb will be created
    + resource "ovh_cloud_project_database" "mongodb" {
      + advanced_configuration = (known after apply)
      + backup_regions         = [
          + "SBG",
        ]
      + backup_time            = "01:00:00"
      + created_at             = (known after apply)
      + description            = "mongodb-test"
      + disk_size              = 10
      + disk_type              = (known after apply)
      + endpoints              = (known after apply)
      + engine                 = "mongodb"
      + flavor                 = "db2-2"
      + id                     = (known after apply)
      + maintenance_time       = (known after apply)
      + network_type           = (known after apply)
      + plan                   = "production"
      + service_name           = "project-id...."
      + status                 = (known after apply)
      + version                = "6.0"
      + ip_restrictions {
          + description = "internal-test-101-10"
          + ip          = "10.28.10.0/24"
          + status      = (known after apply)
        }
      + nodes {
          + network_id = "701e0994-5a87-4d38-9bd6-xxx"
          + region     = "DE"
          + subnet_id  = "8bd8f077-b60d-4ae2-80d2-xxx"
        }
      + nodes {
          + network_id = "701e0994-5a87-4d38-9bd6-xxx"
          + region     = "DE"
          + subnet_id  = "8bd8f077-b60d-4ae2-80d2-xxx"
        }
      + nodes {
          + network_id = "701e0994-5a87-4d38-9bd6-xxx"
          + region     = "DE"
          + subnet_id  = "8bd8f077-b60d-4ae2-80d2-xxx"
        }
      + timeouts {
          + create = "1h"
          + delete = "1h"
          + update = "1h"
        }
    }
    # ovh_cloud_project_database_mongodb_user.mongodb-user will be created
    + resource "ovh_cloud_project_database_mongodb_user" "mongodb-user" {
      + cluster_id   = (known after apply)
      + created_at   = (known after apply)
      + id           = (known after apply)
      + name         = "mongodb_test@admin"
      + password     = (sensitive value)
      + roles        = [
          + "readWriteAnyDatabase@admin",
        ]
      + service_name  = "project-id...."
      + status       = (known after apply)
    }
    # ovh_cloud_project_kube.xxx will be updated in-place
    ~ resource "ovh_cloud_project_kube" "xxx" {
        id                          = "ae9b4359-96c9-473b-8d5c-xxx"
        name                        = "test-cluster"
      ~ nodes_subnet_id             = "498edc06-3be9-4232-80d0-xxx" -> "8bd8f077-b60d-4ae2-80d2-xxx"
        # (15 unchanged attributes hidden)
        # (1 unchanged block hidden)
    }
    # ovh_cloud_project_network_private.net101 will be updated in-place
    ~ resource "ovh_cloud_project_network_private" "net101" {
        id                 = "pn-xxx_101"
      ~ name               = "test-28" -> "test-101"
        # (7 unchanged attributes hidden)
    }
    Plan: 2 to add, 2 to change, 0 to destroy.
    Changes to Outputs:
    ~ mongodb_password           = (sensitive value)
    ~ mongodb_uri                = "mongodb+srv://<username>:<password>@mongodb-xxx.database.cloud.ovh.net/admin?replicaSet=replicaset&tls=true" -> (known after apply)
    ─────────────────────────────────────────────────────────────────────────────
    Saved the plan to: terraform-infra.plan
    To perform exactly these actions, run the following command to apply:
    terraform apply "terraform-infra.plan"
  2. terraform apply -auto-approve terraform-infra.plan

+ terraform apply -auto-approve terraform-infra.plan
ovh_cloud_project_network_private.net101: Modifying... [id=pn-XXXX_101]
ovh_cloud_project_network_private.net101: Modifications complete after 5s [id=pn-XXXX_101]
ovh_cloud_project_kube.xxx: Modifying... [id=ae9b4359-96c9-473b-8d5c-xxxx]
ovh_cloud_project_kube.xxx: Modifications complete after 0s [id=ae9b4359-96c9-473b-8d5c-xxxx]
╷
│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for ovh_cloud_project_database.mongodb to include
│ new values learned so far during apply, provider
│ "[registry.terraform.io/ovh/ovh](http://registry.terraform.io/ovh/ovh)" produced an invalid new value for
│ .ip_restrictions: planned set element
│ cty.ObjectVal(map[string]cty.Value{"description":cty.StringVal("internal-test-101-10"),
│ "ip":cty.StringVal("10.101.10.0/24"), "status":cty.UnknownVal(cty.String)})
│ does not correlate with any element in actual.
│ 
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
╵
  3. I then deleted the database resource manually, but the error continues even though the database resource is now missing...
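Side note for anyone stuck at the same point: once the database has been deleted outside of Terraform, the stale entry can be dropped from state with the standard state subcommand, which is less drastic than deleting the whole state file (a suggestion, not verified against this particular bug):

terraform state rm ovh_cloud_project_database.mongodb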
mwacker-sms commented 1 month ago

Might have been an issue with this private network; after I deleted all resources and the state file, everything went through...