Open · kristapsstrals opened this issue 3 years ago
@kristapsstrals did you find any way to do that?
And what could we expect when the resource is deleted?
As evil as this may sound, the simplest solution is to ignore the applied patch upon resource destruction. There is a similar feature request in HashiCorp's terraform-kubernetes-provider repository. The linked issue is interesting because it lists multiple use cases describing why a patch is needed; in most of them, a rollback of the patch is not really required.
I will try to give it a go and see what happens.
I made a PR for the terraform-kubernetes-provider.
I tested by running an apply with the Terraform configuration below. Then I removed the terraform.tfstate file to simulate an already existing ConfigMap. Then I ran another apply and it patched the resource! (The command sequence is sketched after the configuration.)
provider "kubernetes" {
config_path = "~/.kube/config"
config_context = "docker-desktop"
}
resource "kubernetes_manifest" "test-configmap" {
manifest = yamldecode(
<<-EOT
apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapRoles: |
- rolearn: arn:aws:iam::111111111111:role/DemoEKS-NodeInstanceRole
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
- rolearn: arn:aws:iam::111111111111:role/TeamRole
username: TeamRole
groups:
- system:masters
mapUsers: |
- userarn: arn:aws:iam::111111111111:user/sukumar-test-test
username: sukumar
groups:
- system:masters
EOT
)
field_manager {
force_conflicts = true
}
}
Note: The kubernetes_manifest resource will only patch already existing resources when field_manager.force_conflicts = true.
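For reference, the test amounted to roughly this command sequence (a sketch, assuming local state and the configuration above saved in the current directory):

terraform apply        # first apply: creates the aws-auth ConfigMap and records it in state
rm terraform.tfstate   # drop local state to simulate a ConfigMap that already exists in the cluster
terraform apply        # second apply: with force_conflicts, the existing ConfigMap is patched instead of erroring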
resource "kubernetes_manifest" "test-configmap" {
  manifest = yamldecode(
    <<-EOT
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cm
      namespace: argocd
    data:
      dex.config: |
        connectors:
          - type: ldap
            id: ldap
            name: LDAP
            config:
              host: uxxxxxxxxxxxxxx
              insecureNoSSL: true
              insecureSkipVerify: true
              startTLS: true
              bindDN: CN=k8s_bind,OU=Service Accounts,OU=Accounts,DC=xxxx-p,DC=com
              bindPW: $LDAP_PASSWORD
              userSearch:
                baseDN: "DC=xxxx-p,DC=com"
                filter: "(objectClass=person)"
                username: sAMAccountName
                idAttr: uidNumber
                emailAttr: mail
                nameAttr: name
              groupSearch:
                baseDN: "OU=Security Groups,DC=xxxxxx,DC=com"
                filter: "(objectClass=group)"
                userAttr: DN
                groupAttr: "member:1.2.840.113556.1.4.1941:"
                nameAttr: name
    EOT
  )

  field_manager {
    force_conflicts = true
  }
}
Getting this error:

│ Error: Cannot create resource that already exists
│
│   52: resource "kubernetes_manifest" "test-configmap" {
│
│ resource "argocd/argocd-cm" already exists
I've been wondering if there is any support planned for the kubectl patch operation? This came about when I needed to patch an ingress-nginx-controller service to add a custom annotation for a DigitalOcean workaround. I quite like the option to use the remote YAML file for the ingress controller. To add an annotation to the generated service, I'd then just need to run something like a kubectl patch command. If I wanted to do that with kubectl apply instead, I'd need to know the exact service YAML, update it, and run kubectl apply.
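For illustration only, the kind of one-off patch being described might look like this; the service name and namespace follow the upstream ingress-nginx manifests, and the annotation key and value are assumptions for the DigitalOcean hostname workaround, not taken from the original issue:

# merge-patch an annotation onto the existing Service without touching the rest of its spec
kubectl patch service ingress-nginx-controller \
  --namespace ingress-nginx \
  --type merge \
  --patch '{"metadata":{"annotations":{"service.beta.kubernetes.io/do-loadbalancer-hostname":"example.com"}}}'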