rancher / terraform-provider-rancher2

Terraform Rancher2 provider
https://www.terraform.io/docs/providers/rancher2/
Mozilla Public License 2.0

rancher2_app_v2 import doesn't import enough data and needs to recreate everything #980

Open MrLuje opened 2 years ago

MrLuje commented 2 years ago

Hello,

Importing an existing app succeeds:

```shell
$ terraform import rancher2_app_v2.istio xxxxx.rancher-istio
rancher2_app_v2.istio: Importing from ID "xxx.rancher-istio"...
rancher2_app_v2.istio: Import prepared!
  Prepared rancher2_app_v2 for import
rancher2_app_v2.istio: Refreshing state... [id=xxx.rancher-istio]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
```


## Current result

* Run `terraform plan` (there should be no changes):
```shell
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # rancher2_app_v2.istio must be replaced
-/+ resource "rancher2_app_v2" "istio" {
      ~ annotations                 = {} -> (known after apply)
      + chart_name                  = "rancher-istio"
      + chart_version               = "100.4.0+up1.14.1"
      + cleanup_on_fail             = false
      ~ cluster_name                = "flex-dev-rci" -> (known after apply)
      + disable_hooks               = false
      + disable_open_api_validation = false
      + force_upgrade               = false
      ~ id                          = "xxx.rancher-istio" -> (known after apply)
      ~ labels                      = {} -> (known after apply)
      + name                        = "rancher-istio" # forces replacement
      + namespace                   = "istio-system" # forces replacement
      + repo_name                   = "rancher-charts"
      + system_default_registry     = (known after apply)
      + wait                        = true
        # (1 unchanged attribute hidden)

      - timeouts {}
    }
```

## Expected result

There should be almost no change; in particular, Terraform should not try to destroy and recreate the resource.
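One way to check which attributes the import actually recorded is to inspect the state directly (a diagnostic sketch using the standard Terraform CLI; the resource address matches the example above):

```shell
# Show what `terraform import` put into state; attributes missing
# here are the ones the next plan will treat as brand new.
$ terraform state show rancher2_app_v2.istio
```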

mikekuzak commented 2 years ago

I worked around this with `ignore_changes`:

resource "rancher2_app_v2" "rancher-istio" {
  count = var.k8s_cluster_istio ? 1 : 0

  # Rancher-istio requires rancher-monitoring
  depends_on = [rancher2_app_v2.applications]

  provider      = rancher2.admin
  cluster_id    = rancher2_cluster_sync.cluster-sync.id
  project_id    = data.rancher2_project.system.id
  name          = "rancher-istio"
  namespace     = "istio-system"
  repo_name     = "rancher-charts"
  chart_name    = "rancher-istio"
  chart_version = "1.7.301"
  values        = file("${path.module}/applications/istio.yaml")

  cleanup_on_fail = true

  lifecycle {
    ignore_changes = [
      cluster_id, project_id
    ]
  }

  timeouts {
    create = "15m"
    update = "15m"
  }
}
```
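If the plan after import still wants to replace the app because of attributes like `name` or `namespace` (as in the diff above), the same lifecycle block can be widened; a sketch, with the caveat that Terraform will then stop reconciling those attributes against the config:

```hcl
  lifecycle {
    # Suppress diffs on the attributes the import left unset, so the
    # plan no longer schedules a destroy/create. Terraform will no
    # longer reconcile these against the config.
    ignore_changes = [
      cluster_id,
      project_id,
      name,
      namespace,
    ]
  }
```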
richard-mck commented 1 year ago

I've got the same issue. Monitoring was installed via the GUI, I'm now working to set it up on other clusters via Terraform and have imported the existing install.

```shell
#terraform/rancher_cluster$> terraform import rancher2_app_v2.rancher_monitoring $CLUSTER_ID.rancher-monitoring
rancher2_app_v2.rancher_monitoring: Importing from ID "$CLUSTER_ID.rancher-monitoring"...
rancher2_app_v2.rancher_monitoring: Import prepared!
  Prepared rancher2_app_v2 for import
rancher2_app_v2.rancher_monitoring: Refreshing state... [id=$CLUSTER_ID.rancher-monitoring]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

#terraform/rancher_cluster$> terraform plan
... Refresh output removed ...

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # rancher2_app_v2.rancher_monitoring must be replaced
-/+ resource "rancher2_app_v2" "rancher_monitoring" {
      ~ annotations                 = {} -> (known after apply)
      + chart_name                  = "rancher-monitoring"
      + chart_version               = (known after apply)
      + cleanup_on_fail             = false
      ~ cluster_name                = "$CLUSTER_NAME" -> (known after apply)
      + disable_hooks               = false
      + disable_open_api_validation = false
      + force_upgrade               = false
      ~ id                          = "$CLUSTER_ID.rancher-monitoring" -> (known after apply)
      ~ labels                      = {} -> (known after apply)
      + name                        = "rancher-monitoring" # forces replacement
      + namespace                   = "cattle-monitoring-system" # forces replacement
      + repo_name                   = "rancher-charts"
      + system_default_registry     = (known after apply)
      + wait                        = true
        # (1 unchanged attribute hidden)

      - timeouts {}
    }

Plan: 1 to add, 0 to change, 1 to destroy.
```

In my case, I haven't done any significant config as yet, so replacing the install isn't a huge issue, but I can see situations where this might be a problem.

It also appears that attempts to replace the resource may fail:

```shell
#terraform/rancher_cluster$> terraform apply
 ... Refresh output removed ...

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # rancher2_app_v2.rancher_monitoring must be replaced
-/+ resource "rancher2_app_v2" "rancher_monitoring" {
      ~ annotations                 = {} -> (known after apply)
      + chart_name                  = "rancher-monitoring"
      + chart_version               = (known after apply)
      + cleanup_on_fail             = false
      ~ cluster_name                = "$CLUSTER_NAME" -> (known after apply)
      + disable_hooks               = false
      + disable_open_api_validation = false
      + force_upgrade               = false
      ~ id                          = "$CLUSTER_ID.rancher-monitoring" -> (known after apply)
      ~ labels                      = {} -> (known after apply)
      + name                        = "rancher-monitoring" # forces replacement
      + namespace                   = "cattle-monitoring-system" # forces replacement
      + repo_name                   = "rancher-charts"
      + system_default_registry     = (known after apply)
      + wait                        = true
        # (1 unchanged attribute hidden)

      - timeouts {}
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

rancher2_app_v2.rancher_monitoring: Destroying... [id=$CLUSTER_ID.rancher-monitoring]
╷
│ Error: Error removing App V2 : action [uninstall] not available on [&{ collection map[self:https://rancher.$DOMAIN_NAME/k8s/clusters/$CLUSTER_ID/v1/catalog.cattle.io.apps/rancher-monitoring] map[]}]
```
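When an apply fails mid-replacement like this, one way to back out is to drop the resource from state so Terraform no longer tracks (and no longer tries to destroy) the live app; a sketch using the standard Terraform CLI, after which you can re-import or reconcile manually:

```shell
# Forget the app in Terraform state without touching the live install
$ terraform state rm rancher2_app_v2.rancher_monitoring
```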

My actual app config is minimal:

resource "rancher2_app_v2" "rancher_monitoring" {
  chart_name = "rancher-monitoring"
  cluster_id = rancher2_cluster.main.id
  name       = "rancher-monitoring"
  namespace  = "cattle-monitoring-system"
  repo_name  = "rancher-charts"
}
```
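Until the import gap is fixed, a `prevent_destroy` guard can at least stop an accidental apply from uninstalling the live app; a sketch using standard Terraform lifecycle syntax on the config above:

```hcl
resource "rancher2_app_v2" "rancher_monitoring" {
  chart_name = "rancher-monitoring"
  cluster_id = rancher2_cluster.main.id
  name       = "rancher-monitoring"
  namespace  = "cattle-monitoring-system"
  repo_name  = "rancher-charts"

  lifecycle {
    # Abort any plan that would destroy this resource instead of
    # letting Terraform uninstall the imported app.
    prevent_destroy = true
  }
}
```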
abinet commented 1 year ago

We tried the app_v2 import as well and had the same issue. Apparently the format of the resource ID is wrong: it should be namespaced, i.e. `terraform import rancher2_app_v2.istio <cluster-id>.<namespace>/<app-name>`.
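For example (a sketch; `c-m-xxxxxxxx` is a placeholder cluster ID, substitute your own):

```shell
# Namespaced import ID: <cluster-id>.<namespace>/<app-name>
$ terraform import rancher2_app_v2.istio c-m-xxxxxxxx.istio-system/rancher-istio
```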

snasovich commented 1 year ago

This issue is for Apps & Marketplace related functionality, so I've changed the team to Mapps. FYI @gunamata: wondering if you can fit it into your team's backlog and how it ranks among other things on your team's plate.

gunamata commented 1 year ago

@snasovich, sure. We will review this during our triage meeting; however, it will most likely be considered for a release after the Q4 release.