gavinbunney / terraform-provider-kubectl

Terraform provider to handle raw kubernetes manifest yaml files
https://registry.terraform.io/providers/gavinbunney/kubectl
Mozilla Public License 2.0

Kubectl patch support #64

Open kristapsstrals opened 3 years ago

kristapsstrals commented 3 years ago

I've been wondering if there is any support planned for the kubectl patch operation? This came about when I needed to patch the ingress-nginx-controller service to add a custom annotation for a DigitalOcean workaround. I quite like the option to use the remote YAML file for the ingress controller, somewhat like this:

variable "nginx_version" {
  type = string
  default = "0.41.2"
}

data "http" "nginx_ingress_controller" {
  url = "https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v${var.nginx_version}/deploy/static/provider/do/deploy.yaml"
}

resource "kubectl_manifest" "test" {
  yaml_body = data.http.nginx_ingress_controller.body
}

To add an annotation to the generated service, I'd just need to run something like

kubectl -n ingress-nginx patch service ingress-nginx-controller --type='json' -p='[{"op": "add", "path": "/metadata/annotations/service.beta.kubernetes.io~1do-loadbalancer-hostname", "value": "workaround.example.com"}]'

If I wanted to do that with kubectl apply, I'd need to know the exact service YAML, update it, and then run kubectl apply.
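
Until something like this is supported, one possible interim workaround (a sketch only, assuming kubectl is installed and configured wherever Terraform runs, and using the hashicorp/null provider) is to run the same patch command from a null_resource:

resource "null_resource" "patch_ingress_service" {
  # Re-run the patch whenever the controller version changes.
  triggers = {
    nginx_version = var.nginx_version
  }

  provisioner "local-exec" {
    command = <<-EOT
      kubectl -n ingress-nginx patch service ingress-nginx-controller \
        --type='json' \
        -p='[{"op": "add", "path": "/metadata/annotations/service.beta.kubernetes.io~1do-loadbalancer-hostname", "value": "workaround.example.com"}]'
    EOT
  }

  # Only patch after the manifest from the remote YAML has been applied.
  depends_on = [kubectl_manifest.test]
}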

romankor commented 3 years ago

@kristapsstrals did you find any way to do that?

vigohe commented 3 years ago

And what should we expect when the resource is deleted?

ArtunSubasi commented 3 years ago

And what should we expect when the resource is deleted?

As evil as this may sound, the simplest solution is to ignore the applied patch upon resource destruction. There is a similar feature request in HashiCorp's terraform-kubernetes-provider repository. The linked issue is interesting because it lists multiple use cases describing why a patch is needed. In most of those use cases, a rollback of the patch is not really required.
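
Purely to illustrate what is being requested (nothing below exists in this provider today; the kubectl_patch resource type, its arguments, and the retain_on_destroy flag are all hypothetical), a patch resource with "ignore on destroy" semantics might look roughly like:

# Hypothetical resource type, for illustration only.
resource "kubectl_patch" "do_lb_hostname" {
  kind      = "Service"
  name      = "ingress-nginx-controller"
  namespace = "ingress-nginx"
  type      = "json"

  patch = jsonencode([
    {
      op    = "add"
      path  = "/metadata/annotations/service.beta.kubernetes.io~1do-loadbalancer-hostname"
      value = "workaround.example.com"
    }
  ])

  # Hypothetical flag: keep the patched fields in place on destroy,
  # per the "ignore the patch on destruction" suggestion above.
  retain_on_destroy = true
}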

alekc commented 2 years ago

I will try to give it a go and see what happens.

aidanmelen commented 2 years ago

I made a PR for the terraform-kubernetes-provider.

I tested by running an apply with the Terraform below, then removed the terraform.tfstate file to simulate an already-existing ConfigMap. Then I ran another apply and it patched the resource!

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "docker-desktop"
}

resource "kubernetes_manifest" "test-configmap" {
  manifest = yamldecode(
    <<-EOT
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - rolearn: arn:aws:iam::111111111111:role/DemoEKS-NodeInstanceRole
          username: system:node:{{EC2PrivateDNSName}}
          groups:
            - system:bootstrappers
            - system:nodes
        - rolearn: arn:aws:iam::111111111111:role/TeamRole
          username: TeamRole
          groups:
          - system:masters
      mapUsers: |
        - userarn: arn:aws:iam::111111111111:user/sukumar-test-test
          username: sukumar
          groups:
            - system:masters
    EOT
  )

  field_manager {
    force_conflicts = true 
  }
}

Note: kubernetes_manifest will only patch an already-existing resource when field_manager.force_conflicts = true.
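
A related knob, shown in a minimal sketch below: if other controllers also write to the same object, the computed_fields argument of kubernetes_manifest can be used alongside force_conflicts so Terraform does not show perpetual diffs for fields it does not own. The aws-auth.yaml file path and the terraform-patch manager name are illustrative only.

resource "kubernetes_manifest" "aws_auth" {
  # Illustrative file path; the inline heredoc above works the same way.
  manifest = yamldecode(file("${path.module}/aws-auth.yaml"))

  # Fields other controllers may write; treating them as computed avoids
  # perpetual diffs (these two paths are also the provider's defaults).
  computed_fields = ["metadata.labels", "metadata.annotations"]

  field_manager {
    # Illustrative manager name; "Terraform" is the default.
    name            = "terraform-patch"
    force_conflicts = true
  }
}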

mohang6770 commented 6 months ago

I tried the same approach with the configuration below:

resource "kubernetes_manifest" "test-configmap" {
  manifest = yamldecode(
    <<-EOT
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cm
      namespace: argocd
    data:
      dex.config: |
        connectors:
        - type: ldap
          id: ldap
          name: LDAP
          config:
            host: uxxxxxxxxxxxxxx
            insecureNoSSL: true
            insecureSkipVerify: true
            startTLS: true
            bindDN: CN=k8s_bind,OU=Service Accounts,OU=Accounts,DC=xxxx-p,DC=com
            bindPW: $LDAP_PASSWORD
            userSearch:
              baseDN: "DC=xxxx-p,DC=com"
              filter: "(objectClass=person)"
              username: sAMAccountName
              idAttr: uidNumber
              emailAttr: mail
              nameAttr: name
            groupSearch:
              baseDN: "OU=Security Groups,DC=xxxxxx,DC=com"
              filter: "(objectClass=group)"
              userAttr: DN
              groupAttr: "member:1.2.840.113556.1.4.1941:"
              nameAttr: name
    EOT
  )

  field_manager {
    force_conflicts = true
  }
}

Getting the following error:

Error: Cannot create resource that already exists

  52: resource "kubernetes_manifest" "test-configmap" {

resource "argocd/argocd-cm" already exists