gavinbunney / terraform-provider-kubectl

Terraform provider to handle raw kubernetes manifest yaml files
https://registry.terraform.io/providers/gavinbunney/kubectl
Mozilla Public License 2.0

unpredictable behavior when deleting the provider #197

Open yleizour-splio opened 2 years ago

yleizour-splio commented 2 years ago

We are working with several Kubernetes EKS clusters and a kubectl configuration that contains several contexts.

They are named cluster-A and cluster-B.

We have a deployment (a namespace, for this example) on both clusters. The deployment on both clusters was done via Terraform with the kubectl provider, using this sample code:

provider "aws" ...
}

data "aws_eks_cluster" "cluster" {
  name = "cluster-A"
}
data "aws_eks_cluster_auth" "cluster" {
  name = "cluster-A"
}
provider "kubectl" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  token                  = data.aws_eks_cluster_auth.cluster.token
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  load_config_file       = false
}

resource "kubectl_manifest" "test" {
  yaml_body = <<YAML
kind: Namespace
apiVersion: v1
metadata:
  name: dada1a24
  labels:
    name: dada1a24
YAML
}

As a test, we remove the kubectl provider and the kubectl_manifest resource from the configuration and apply it on cluster-A:

provider "aws" ...
}
data "aws_eks_cluster" "cluster" {
  name = "cluster-A"
}
data "aws_eks_cluster_auth" "cluster" {
  name = "cluster-A"
}

# provider "kubectl" {
#   host                   = data.aws_eks_cluster.cluster.endpoint
#   token                  = data.aws_eks_cluster_auth.cluster.token
#   cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
#   load_config_file       = false
# }

# resource "kubectl_manifest" "test" {
#   yaml_body = <<YAML
# kind: Namespace
# apiVersion: v1
# metadata:
#   name: dada1a24
#   labels:
#     name: dada1a24
# YAML
# }

My local kubectl configuration currently points to cluster-B:

kubectl config get-contexts
CURRENT   NAME        CLUSTER     AUTHINFO    NAMESPACE
          cluster-A   cluster-A   cluster-A   default
*         cluster-B   cluster-B   cluster-B   default

I run terraform apply:

  # kubectl_manifest.test will be destroyed
  # (because kubectl_manifest.test is not in configuration)
  - resource "kubectl_manifest" "test" {
      - api_version             = "v1" -> null
      - apply_only              = false -> null
      - force_conflicts         = false -> null
      - force_new               = false -> null
      - id                      = "/api/v1/namespaces/dada1a24" -> null
      - kind                    = "Namespace" -> null
      - live_manifest_incluster = (sensitive value)
      - live_uid                = "6b384a9a-468e-483f-b114-7e621c8ddd89" -> null
      - name                    = "dada1a24" -> null
      - server_side_apply       = false -> null
      - uid                     = "0048eda2-33a3-47ce-b38b-3fe45566e5ca" -> null
      - validate_schema         = true -> null
      - wait_for_rollout        = true -> null
      - yaml_body               = (sensitive value)
      - yaml_body_parsed        = <<-EOT
            apiVersion: v1
            kind: Namespace
            metadata:
              labels:
                name: dada1a24
              name: dada1a24
        EOT -> null
      - yaml_incluster          = (sensitive value)
    }

No worries, that's the point :) Now switch the kubectl context and check:

kubectl config use-context cluster-A
Switched to context cluster-A
kubectl get namespace | grep dada1a24                                                            
dada1a24              Active   62m

Strange: after the terraform apply, the namespace is still present on cluster-A.

Check on cluster-B:

kubectl config use-context cluster-B
Switched to context cluster-B
kubectl get namespace | grep dada1a24                                                                 
==> nothing
kubectl get namespace | grep dada1a24 | wc -l 
0

The namespace has been deleted from cluster-B, not cluster-A.

I think this is related to the fact that the default value of the load_config_file property is true. So when the provider block is deleted, the local kubeconfig is loaded instead and the destroy runs against whatever context is currently selected locally, which can lead to unexpected behavior.
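As a workaround, here is a minimal sketch (same data sources as above, nothing new assumed beyond splitting the change into two steps): keep the kubectl provider block pointing at cluster-A while removing only the resource, apply, then drop the provider block in a follow-up change.

provider "aws" {
  # ...
}

data "aws_eks_cluster" "cluster" {
  name = "cluster-A"
}
data "aws_eks_cluster_auth" "cluster" {
  name = "cluster-A"
}

# Keep the provider configuration so the destroy of the removed
# kubectl_manifest still targets cluster-A instead of falling back
# to the local kubeconfig.
provider "kubectl" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  token                  = data.aws_eks_cluster_auth.cluster.token
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  load_config_file       = false
}

# The kubectl_manifest.test resource is removed here; terraform apply
# now destroys the namespace on cluster-A. The provider block can then
# be removed in a second apply.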

Wouldn't it be safer to change the default value to avoid this kind of behavior?
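If the default were flipped, users who intentionally rely on the local kubeconfig would simply opt in explicitly, for example (a sketch; the config_path / config_context attribute names are assumed to match the provider's kubeconfig options):

provider "kubectl" {
  # Explicitly opt in to the local kubeconfig and pin the context,
  # instead of relying on whatever context is currently selected.
  load_config_file = true
  config_path      = "~/.kube/config"
  config_context   = "cluster-B"
}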