gavinbunney / terraform-provider-kubectl

Terraform provider to handle raw Kubernetes manifest YAML files
https://registry.terraform.io/providers/gavinbunney/kubectl
Mozilla Public License 2.0

failed to create all resources on apparently successful run #237

Open nbwest opened 1 year ago

nbwest commented 1 year ago

Using the kubectl provider to install an Nginx Ingress Controller on an EKS cluster from a GitLab pipeline appeared to succeed, but not all of the resources were created.

ingress_nginx_controller.tf

data "http" "ingress_nginx_controller" {
  url = "https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/aws/deploy.yaml"
}
resource "kubectl_manifest" "ingress-nginx-controller" {
  yaml_body = data.http.ingress_nginx_controller.response_body
}

CI job output:

kubectl_manifest.ingress-nginx-controller: Creation complete after 2s [id=/api/v1/namespaces/ingress-nginx]
Apply complete! Resources: 3 added, 0 changed, 1 destroyed.

Checking with kubectl gives the following, so I know the run succeeded in creating the namespace, but it contains only the default resources:

$ kubectl api-resources --verbs=list --namespaced -o name   | xargs -n 1 kubectl get --show-kind --ignore-not-found  -n ingress-nginx
NAME                         DATA   AGE
configmap/kube-root-ca.crt   1      20m
NAME                         TYPE                                  DATA   AGE
secret/default-token-w8s92   kubernetes.io/service-account-token   3      20m
NAME                     SECRETS   AGE
serviceaccount/default   1         20m

But applying the same manifest again with kubectl creates a lot more resources:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/aws/deploy.yaml
namespace/ingress-nginx unchanged
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured

danielserrao commented 1 year ago

I have a similar issue: with a single manifest containing resources separated by ---, only the resource at the top gets deployed by this TF resource. Deploying the exact same manifest with kubectl apply -f creates all of the resources successfully.
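
For reference, a minimal sketch of the failing pattern (the resource and ConfigMap names here are made up): a single kubectl_manifest is handed a multi-document string, and per the behaviour described above only the first document ends up applied.

# Hypothetical example: two YAML documents in one yaml_body.
# Only the first ConfigMap (example-a) is applied; example-b is silently dropped.
resource "kubectl_manifest" "multi_doc_example" {
  yaml_body = <<-EOT
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example-a
      namespace: default
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example-b
      namespace: default
  EOT
}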

DrkCloudStrife commented 1 year ago

I was having a similar issue. What worked for me was to use for_each with data "kubectl_file_documents" in the resource "kubectl_manifest". Taking your example @nbwest, I was able to get all of your manifest configs planned correctly by doing the following:

# ingress_nginx_controller.tf

# Get the deploy.yaml config file
data "http" "ingress_nginx_controller" {
  url = "https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/aws/deploy.yaml"
}

# Pass the content to kubectl_file_documents: https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs/data-sources/kubectl_file_documents
data "kubectl_file_documents" "ingress_nginx_config" {
  content = data.http.ingress_nginx_controller.response_body
}

# Use kubectl_file_documents to split the multi-document YAML into one kubectl_manifest per document
resource "kubectl_manifest" "ingress-nginx-controller" {
  for_each  = data.kubectl_file_documents.ingress_nginx_config.manifests
  yaml_body = each.value
}
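
Note that the keys of the manifests map are the API paths of the individual documents, which become the for_each keys in the resource addresses (you can see this in the plan snippet below). A quick way to inspect them, as a hypothetical sketch (the output name is made up):

# Hypothetical output listing the parsed document keys; keys() is a Terraform
# built-in, and manifests is the map attribute from kubectl_file_documents.
output "ingress_nginx_document_keys" {
  value = keys(data.kubectl_file_documents.ingress_nginx_config.manifests)
}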

The full plan output is rather large to paste here, but here's a snippet:

...
# module.helm.module.minikube[0].kubectl_manifest.ingress-nginx-controller["/apis/rbac.authorization.k8s.io/v1/namespaces/ingress-nginx/roles/ingress-nginx-admission"] will be created
  + resource "kubectl_manifest" "ingress-nginx-controller" {
      + api_version             = "rbac.authorization.k8s.io/v1"
      + apply_only              = false
      + force_conflicts         = false
      + force_new               = false
      + id                      = (known after apply)
      + kind                    = "Role"
      + live_manifest_incluster = (sensitive value)
      + live_uid                = (known after apply)
      + name                    = "ingress-nginx-admission"
      + namespace               = "ingress-nginx"
      + server_side_apply       = false
      + uid                     = (known after apply)
      + validate_schema         = true
      + wait_for_rollout        = true
      + yaml_body               = (sensitive value)
      + yaml_body_parsed        = <<-EOT
            apiVersion: rbac.authorization.k8s.io/v1
            kind: Role
            metadata:
              labels:
                app.kubernetes.io/component: admission-webhook
                app.kubernetes.io/instance: ingress-nginx
                app.kubernetes.io/name: ingress-nginx
                app.kubernetes.io/part-of: ingress-nginx
                app.kubernetes.io/version: 1.2.0
              name: ingress-nginx-admission
              namespace: ingress-nginx
            rules:
            - apiGroups:
              - ""
              resources:
              - secrets
              verbs:
              - get
              - create
        EOT
      + yaml_incluster          = (sensitive value)
    }

Plan: 19 to add, 0 to change, 0 to destroy.