hashicorp / terraform-provider-kubernetes

Terraform Kubernetes provider
https://www.terraform.io/docs/providers/kubernetes/
Mozilla Public License 2.0

Projected volume gets replaced on every apply #1358

Open rymnc opened 3 years ago

rymnc commented 3 years ago

Terraform Version, Provider Version and Kubernetes Version

Terraform version: v1.0.0
Kubernetes provider version: v2.3.2
Kubernetes version: 1.20.8-gke.900

Affected Resource(s)

  * kubernetes_deployment

Terraform Configuration Files

# If the deployment does require zzz and has secrets of its own
dynamic "volume" {
  for_each = local.secret_length_gt_0 && var.inject_db_url ? [1] : []
  content {
    name = "${var.service_name}-secrets"
    projected {
      sources {
        secret {
          name     = "${var.service_name}-secrets"
          optional = false
        }
        secret {
          name     = "zzz-secrets"
          optional = false
        }
      }
    }
  }
}
# If the deployment does not require zzz and has secrets of its own
dynamic "volume" {
  for_each = local.secret_length_gt_0 && !var.inject_db_url ? [1] : []
  content {
    name = "${var.service_name}-secrets"
    secret {
      secret_name = "${var.service_name}-secrets"
    }
  }
}
# If the deployment requires zzz but doesn't have any secrets of its own
dynamic "volume" {
  for_each = !local.secret_length_gt_0 && var.inject_db_url ? [1] : []
  content {
    name = "${var.service_name}-secrets"
    secret {
      secret_name = "zzz-secrets"
    }
  }
}

Assume the following inputs:

var.inject_db_url = true
local.secret_length_gt_0 = true
var.service_name = "foobar"

Steps to Reproduce

  1. terraform apply --> creates the deployment with the volume
  2. terraform apply --> the plan reports that the projected volume must be replaced

Expected Behavior

The second apply should complete without any changes.

Actual Behavior

The second secret in volume.projected.sources is always replaced. Example:

volume {
                        name = "service-secrets"

                      ~ projected {
                            # (1 unchanged attribute hidden)

                          ~ sources {

                              + secret {
                                  + name     = "zzz-secrets"
                                  + optional = false
                                }
                                # (1 unchanged block hidden)
                            }
                          - sources {

                              - secret {
                                  - name     = "zzz-secrets" -> null
                                  - optional = false -> null
                                }
                            }
                        }
                    }

The state for the service above:

volume {
                    name = "service-secrets"

                    projected {
                        default_mode = "0644"

                        sources {

                            secret {
                                name     = "service-secrets"
                                optional = false
                            }
                        }
                        sources {

                            secret {
                                name     = "zzz-secrets"
                                optional = false
                            }
                        }
                    }
                }

Is it intended to store them in different sources?

Important Factoids

The volume setup is designed so that the zzz secret, when the service requires it, is mounted in a projected volume together with the service's own secrets (if any). Exactly one of these volumes should be mounted.

The issue only arises with the projected volume.
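Under these conditions, the three dynamic "volume" blocks could be collapsed into one by computing the list of secret names in a local value. A sketch only: the local name is hypothetical, and note that it uses a projected volume even in the single-secret cases, unlike the original plain secret volumes.

```hcl
locals {
  # Hypothetical helper: secrets to project, derived from the same
  # conditions as the three dynamic "volume" blocks.
  projected_secret_names = concat(
    local.secret_length_gt_0 ? ["${var.service_name}-secrets"] : [],
    var.inject_db_url ? ["zzz-secrets"] : []
  )
}

dynamic "volume" {
  for_each = length(local.projected_secret_names) > 0 ? [1] : []
  content {
    name = "${var.service_name}-secrets"
    projected {
      # One "sources" block per secret, mirroring how the provider
      # stores each projection as a separate element in state.
      dynamic "sources" {
        for_each = local.projected_secret_names
        content {
          secret {
            name     = sources.value
            optional = false
          }
        }
      }
    }
  }
}
```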

rymnc commented 3 years ago

On further investigation of the state, it looks like instead of using one source with multiple secrets, the provider places each secret in a separate source. I assume this is why it wants to update the deployment every time. Example:

{
  "projected": [
    {
      "default_mode": "0644",
      "sources": [
        {
          "config_map": [],
          "downward_api": [],
          "secret": [
            {
              "items": [],
              "name": "service-secrets",
              "optional": false
            }
          ],
          "service_account_token": []
        },
        {
          "config_map": [],
          "downward_api": [],
          "secret": [
            {
              "items": [],
              "name": "zzz-secrets",
              "optional": false
            }
          ],
          "service_account_token": []
        }
      ]
    }
  ]
}

I've modified the volume config to look like this:

dynamic "volume" {
  for_each = local.secret_length_gt_0 && var.inject_db_url ? [1] : []
  content {
    name = "${var.service_name}-secrets"
    projected {
      sources {
        secret {
          name     = "${var.service_name}-secrets"
          optional = false
        }
      }
      sources {
        secret {
          name     = "zzz-secrets"
          optional = false
        }
      }
    }
  }
}

It seems to work fine. The provider should probably validate the configuration and reject multiple secrets within the same sources block.

github-actions[bot] commented 2 years ago

Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!

dico-harigkev commented 2 years ago

We still see the same behavior in a job resource with a projected config_map and secret in the same volume.

Terraform v1.2.5 - provider registry.terraform.io/hashicorp/kubernetes v2.12.1

Notably, the perpetual diff is not resolved by updating the resource in place to use two separate sources blocks (one per secret/config_map) and adding the optional = false parameter (which the documentation says is optional). Each plan still tries to reorder the sources blocks, but the apply doesn't seem to update the state to the plan's preferred order. Perhaps the comparison is ordered during plan but unordered during apply?

Only tainting and recreating the resource, in the exact shape and order shown in the old plans, has fixed it.
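Tainting forces recreation on the next apply; on Terraform >= 0.15.2 the -replace plan option does the same in one step. A sketch with a hypothetical resource address:

```shell
# Hypothetical address; substitute the actual resource.
terraform taint 'kubernetes_job_v1.example'
terraform apply

# Or, equivalently, in a single step:
terraform apply -replace='kubernetes_job_v1.example'
```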

Example snippet with perpetual diff (part of a kubernetes_job_v1 spec.template.spec.volume):

        volume {
          name = "v"
          projected {
            sources {
              config_map {
                name = "c"
              }
              secret {
                name = "s"
              }
            }
          }
        }

Fixed example, after taint/recreation:

        volume {
          name = "v"
          projected {
            sources {
              config_map {
                name = "c"
                optional = false
              }
            }
            sources {
              secret {
                name = "s"
                optional = false
              }
            }
          }
        }
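If the set of projections grows beyond one config map and one secret, the one-sources-block-per-entry shape can also be generated with nested dynamic blocks. A sketch, assuming a hypothetical local describing the projections:

```hcl
locals {
  # Hypothetical list of projections; kind selects the block type.
  projections = [
    { kind = "config_map", name = "c" },
    { kind = "secret",     name = "s" },
  ]
}

volume {
  name = "v"
  projected {
    # One "sources" block per projection, matching the fixed shape above.
    dynamic "sources" {
      for_each = local.projections
      content {
        dynamic "config_map" {
          for_each = sources.value.kind == "config_map" ? [1] : []
          content {
            name     = sources.value.name
            optional = false
          }
        }
        dynamic "secret" {
          for_each = sources.value.kind == "secret" ? [1] : []
          content {
            name     = sources.value.name
            optional = false
          }
        }
      }
    }
  }
}
```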

github-actions[bot] commented 1 year ago

Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!

alk-jozog commented 12 months ago

Not stale