Open zapman449 opened 4 years ago
@zapman449 @memory @vide @blawlor @aareet upvote my PR ^^^ so that we can get some traction on this. I got patch functionality working for the `kubernetes_manifest` resource.
This is how we can patch with the patch PR. First I ran `apply` with the Terraform below. Then I removed the `terraform.tfstate` file to simulate an already existing ConfigMap. Then I ran another `apply` and it patched the resource!
```hcl
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "docker-desktop"
}

resource "kubernetes_manifest" "test-configmap" {
  manifest = yamldecode(
    <<-EOT
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - rolearn: arn:aws:iam::111111111111:role/DemoEKS-NodeInstanceRole
          username: system:node:{{EC2PrivateDNSName}}
          groups:
            - system:bootstrappers
            - system:nodes
        - rolearn: arn:aws:iam::111111111111:role/TeamRole
          username: TeamRole
          groups:
            - system:masters
      mapUsers: |
        - userarn: arn:aws:iam::111111111111:user/sukumar-test-test
          username: sukumar
          groups:
            - system:masters
    EOT
  )

  field_manager {
    force_conflicts = true
  }
}
```
Note: the `kubernetes_manifest` resource will only patch already existing resources when `field_manager.force_conflicts = true`.
For those of you whose use-case is patching labels, annotations, and ConfigMap entries, v2.10.0 of the provider brought support for doing this using Server-Side Apply & Field Manager in the following resources:
Other use-cases on our radar for resources where Terraform will partially manage a Kubernetes resource:
If you have another use-case please share it.
For some context on why we haven't added a completely generic patch resource see this discussion.
@jrhouston it would be great if we could update/remove an annotation if possible. In order to run a Fargate-only EKS cluster, you have to remove an annotation from the CoreDNS deployment's template spec using:
```sh
kubectl patch deployment coredns \
  -n kube-system \
  --type json \
  -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
```
This deployment is created automatically by the EKS service, hence why we have to modify it after the fact.
See more at https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-coredns, and there is a small convo started at https://github.com/aws-samples/aws-eks-accelerator-for-terraform/issues/394
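As an aside, the `~1` in that patch path is JSON Pointer escaping (RFC 6901), not a literal: a `/` inside a key must be written as `~1`, and a `~` as `~0`. A minimal sketch of the decoding rule:

```python
def unescape_pointer_token(token: str) -> str:
    # RFC 6901: "~1" decodes to "/" and "~0" to "~"; "~1" must be
    # decoded first so that "~01" correctly becomes the literal "~1".
    return token.replace("~1", "/").replace("~0", "~")

# The annotation key targeted by the kubectl patch above:
print(unescape_pointer_token("eks.amazonaws.com~1compute-type"))
# → eks.amazonaws.com/compute-type
```

This is why the annotation `eks.amazonaws.com/compute-type` appears as `eks.amazonaws.com~1compute-type` in the patch path.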
Did you try https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/annotations and set the value to null? It should remove the annotation.
@jkroepke yes, that was the resource I was using, but I couldn't figure out how to get the resource to target the `spec.template.metadata` annotations.
Using it like below just lets me access the Deployment's annotations, but I need to modify the pod spec's annotations:
```hcl
resource "kubernetes_annotations" "coredns" {
  api_version = "apps/v1"
  kind        = "Deployment"
  metadata {
    name      = "coredns"
    namespace = "kube-system"
  }
  annotations = {}
}
```
According to the `kubernetes_annotations` documentation:
> This resource uses field management and server-side apply to manage only the annotations that are defined in the Terraform configuration.

The annotation is not managed in the Terraform configuration, so I'm not sure whether the resource is applicable to an EKS-managed add-on like CoreDNS.
> Using it like below just lets me access the Deployment's annotations, but I need to modify the pod spec's annotations
@bryantbiggs Yeah, this isn't going to work because the resource is an analog to `kubectl annotate` – it doesn't patch deeper nested metadata, just the top level. I think that's something we're going to need to add a separate resource for. Thanks for surfacing this use-case.
From my work in this PR I found that the `kubernetes_manifest` resource was erroring out on patching already existing resources because of a condition in the `apply.go` code. Commenting out this if statement would seemingly add support for patching already existing resources. I was pleasantly surprised that it worked for the tests that I ran. My understanding is that this worked because the `kubernetes_manifest` resource refreshes the k8s resource during the plan phase, which essentially acts as an import.
There are plenty of examples where bringing an already existing k8s resource under control of this Terraform provider would be extremely useful: `aws-auth`, `coredns`, etc. So I propose we add a new experimental `kubernetes_patch` resource, as the OP suggested. This code would be trivial to maintain, as it would be reusing `kubernetes_manifest` resource code... except for the already-exists condition.
Normally I would protest breaking the fundamentals of Terraform CRUD, but since there is an analog in `kubectl patch`, it will be intuitive for users to understand how this special resource would behave. Since it will be a dedicated resource, we can thoroughly document the special lifecycle behavior to avoid any confusion.
@jrhouston I know the maintainers have put a lot of thought into this. For this reason, the implementation I described above feels too good to be true. I also recognize that I am probably overlooking many details/side-effects. Nonetheless, it worked for my simple test and I would be happy to make a new PR for this experimental resource.
> If you have another use-case please share it.
Our use-case is actually applying an affinity rule to the `kube-dns` Deployment:
```sh
kubectl patch deployment kube-dns --namespace kube-system --patch "$(cat <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: workload
                    operator: In
                    values:
                      - stable
EOF
)"
```
Not sure if it's related to patch, but being able to do the equivalent of `kubectl set env ...` would be quite helpful as well.
Here's a more fleshed-out workaround that I came up with based on what robertb724 had suggested:
First, create a service account in the `kube-system` namespace that our `kubernetes_job` resources will use, like the following:
```hcl
resource "kubernetes_service_account" "core_dns_fixer" {
  metadata {
    name      = "core-dns-fixer"
    namespace = "kube-system"
  }
}
```
Then create a role with permissions to get and patch `deployment.apps/coredns` in the `kube-system` namespace, like the following:
```hcl
resource "kubernetes_role" "core_dns_fixer" {
  metadata {
    name      = "core-dns-fixer"
    namespace = "kube-system"
  }
  rule {
    api_groups     = ["apps"]
    resources      = ["deployments"]
    resource_names = ["coredns"]
    verbs          = ["get", "patch"]
  }
}
```
Then bind the service account to the role like the following:
```hcl
resource "kubernetes_role_binding" "core_dns_fixer" {
  metadata {
    name      = "core-dns-fixer"
    namespace = "kube-system"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = kubernetes_role.core_dns_fixer.metadata[0].name
  }
  subject {
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.core_dns_fixer.metadata[0].name
    namespace = "kube-system"
  }
}
```
Once these RBAC bits are in place, create kubernetes jobs that will utilize the service account to patch the coredns deployment and restart it, like the following:
```hcl
resource "kubernetes_job" "patch_core_dns" {
  depends_on = [
    # assumes the `kube-system` fargate profile is created in the same code
    # edit to match yours, or comment out if it's being created elsewhere
    aws_eks_fargate_profile.main["kube-system"],
    kubernetes_role_binding.core_dns_fixer
  ]
  metadata {
    name      = "patch-core-dns"
    namespace = "kube-system"
  }
  spec {
    template {
      metadata {}
      spec {
        service_account_name = kubernetes_service_account.core_dns_fixer.metadata[0].name
        container {
          name    = "patch-core-dns"
          image   = "bitnami/kubectl:latest"
          command = ["/bin/sh", "-c", "kubectl patch deployment.apps/coredns -n kube-system --type json -p='[{\"op\": \"remove\", \"path\": \"/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type\"}]'"]
        }
        restart_policy = "Never"
      }
    }
  }
  wait_for_completion = true
  timeouts {
    create = "5m"
  }
}
```
```hcl
resource "kubernetes_job" "restart_core_dns" {
  depends_on = [
    kubernetes_job.patch_core_dns
  ]
  metadata {
    name      = "restart-core-dns"
    namespace = "kube-system"
  }
  spec {
    template {
      metadata {}
      spec {
        service_account_name = kubernetes_service_account.core_dns_fixer.metadata[0].name
        container {
          name    = "restart-core-dns"
          image   = "bitnami/kubectl:latest"
          command = ["/bin/sh", "-c", "kubectl rollout restart deployment.apps/coredns -n kube-system"]
        }
        restart_policy = "Never"
      }
    }
  }
  wait_for_completion = true
  timeouts {
    create = "5m"
  }
}
```
If you want to run this along with the same terraform code as your eks cluster is created in, you can configure the Kubernetes provider with something like the following:
```hcl
data "aws_eks_cluster" "main" {
  name = aws_eks_cluster.main.name
}

data "aws_eks_cluster_auth" "main" {
  name = aws_eks_cluster.main.name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.main.token
}
```
Note: this assumes that you've named your `aws_eks_cluster` resource `main`. Also refer to the comments in the explicit `depends_on` in the `kubernetes_job.patch_core_dns` resource: the Fargate profile for the `kube-system` namespace needs to be created first or the job will not run.
That's very creative @cmanfre4
Expanding on @cmanfre4's answer, we could probably simplify it to a single job with this command:
```hcl
["/bin/sh", "-c", "compute_type=$(kubectl get deployment.apps/coredns -n kube-system -o jsonpath='{.spec.template.metadata.annotations.eks\\.amazonaws\\.com/compute-type}'); [ ! -z \"$compute_type\" ] && kubectl patch deployment.apps/coredns -n kube-system --type json -p='[{\"op\":\"remove\", \"path\": \"/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type\"}]' && kubectl rollout restart deployment.apps/coredns -n kube-system"]
```
It does 2 things:
Just FYI – if you patch CoreDNS on EKS, you'll want to eject from the EKS API managing the CoreDNS deployment by setting `preserve = true`. If not, the next time the EKS API updates the add-on, it will remove your patch and cause your DNS to fail.
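For reference, a hedged sketch of that setting on the AWS provider's `aws_eks_addon` resource (the cluster-name variable is an assumption for this example): with `preserve = true`, destroying the add-on resource removes it from EKS management but leaves the CoreDNS objects running in the cluster, so a patch applied to them survives.

```hcl
# Sketch: manage CoreDNS as an EKS add-on, but keep its Kubernetes
# objects (and any patches to them) if the add-on is ever removed.
resource "aws_eks_addon" "coredns" {
  cluster_name = var.cluster_name # assumption: your cluster-name variable
  addon_name   = "coredns"

  # On destroy, delete only the add-on registration -- the Deployment
  # stays in the cluster instead of being removed along with it.
  preserve = true
}
```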
Thanks @jkroepke!
I was able to remove the default storage class from an EKS cluster with https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/annotations.
```hcl
resource "kubernetes_annotations" "default-storageclass" {
  api_version = "storage.k8s.io/v1"
  kind        = "StorageClass"
  force       = "true"
  metadata {
    name = "gp2"
  }
  annotations = {
    "storageclass.kubernetes.io/is-default-class" = "false"
  }
}
```
That's a very interesting way of doing this.
I have a very simple workaround in the form of a bootstrap script which deletes the relevant default resources:
```bash
#!/usr/bin/env bash
set -euo pipefail

while test $# -gt 0; do
  case "$1" in
    -h | --help)
      echo " "
      echo "options:"
      echo "-h, --help show brief help"
      echo "--context  specify kube context"
      exit 0
      ;;
    --context)
      shift
      if test $# -gt 0; then
        context=$1
      else
        echo "no kube context specified"
        exit 1
      fi
      shift
      ;;
    *)
      break
      ;;
  esac
done

for kind in daemonset clusterRole clusterRoleBinding serviceAccount; do
  echo "deleting $kind/aws-node"
  kubectl --context "$context" --namespace kube-system delete $kind aws-node
done

for kind in customResourceDefinition; do
  echo "deleting $kind/eniconfigs.crd.k8s.amazonaws.com"
  kubectl --context "$context" --namespace kube-system delete $kind eniconfigs.crd.k8s.amazonaws.com
done

for kind in daemonset serviceAccount; do
  echo "deleting $kind/kube-proxy"
  kubectl --context "$context" --namespace kube-system delete $kind kube-proxy
done

for kind in configMap; do
  echo "deleting $kind/kube-proxy-config"
  kubectl --context "$context" --namespace kube-system delete $kind kube-proxy-config
done

for kind in deployment serviceAccount configMap; do
  echo "deleting $kind/coredns"
  kubectl --context "$context" --namespace kube-system delete $kind coredns
done

for kind in service; do
  echo "deleting $kind/kube-dns"
  kubectl --context "$context" --namespace kube-system delete $kind kube-dns
done

for kind in storageclass; do
  echo "deleting $kind/gp2"
  kubectl --context "$context" delete $kind gp2
done
```
I've got a similar requirement to update the ArgoCD password, and this worked for me:
```hcl
resource "null_resource" "argocd_update_pass" {
  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    command     = <<EOT
kubectl patch secret -n argocd argocd-secret -p '{"stringData": { "admin.password": "'$(htpasswd -bnBC 10 "" ${data.azurerm_key_vault_secret.argocd-password.value} | tr -d ':\n')'"}}' --kubeconfig ./temp/kube-config.yaml;
EOT
  }
  depends_on = [
    helm_release.argocd,
    local_file.kube_config
  ]
}

resource "local_file" "kube_config" {
  content  = azurerm_kubernetes_cluster.aks.kube_config_raw
  filename = "${path.module}/temp/kube-config.yaml"
}
```
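The `-p` argument in that command is a merge patch against the Secret: Kubernetes lets you supply plain text under `stringData` and base64-encodes it into `data` server-side. A small sketch of building that payload (the bcrypt hash here is a placeholder, not real `htpasswd` output):

```python
import json

def argocd_password_patch(bcrypt_hash: str) -> str:
    # Secrets accept plain text under "stringData"; the API server
    # merges it into the base64-encoded "data" field on apply.
    return json.dumps({"stringData": {"admin.password": bcrypt_hash}})

print(argocd_password_patch("$2y$10$placeholder"))
```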
For those of you whose use-case is patching labels, annotations, and ConfigMap entries, v2.10.0 of the provider brought support for doing this using Server-Side Apply & Field Manager in the following resources:
Other use-cases on our radar for resources where Terraform will partially manage a Kubernetes resource:
- Adding container environment variables
- Setting taints and tolerations
If you have another use-case please share it.
For some context on why we haven't added a completely generic patch resource see this discussion.
Thank you for this! I think this resolves a few use cases around annotations, labels and config maps that people have in this thread.
I think the most-wanted use cases still missing are:
Looking at this thread, I believe these are the top-priority ones to do first. The following use-cases were also mentioned, but seem to be less common:
I wonder if it would not be better to close this issue and open specific, focused issues for these 4 use-cases.
Keep up the good work! Thanks.
> If you have another use-case please share it.
@jrhouston We have another use case. We're running on GKE with Calico enabled, which gives us a lot of readiness/liveness probe failures because the timeout is set to 1s. This is fixed in Calico (see https://github.com/projectcalico/calico/issues/5122#issuecomment-982865898), but the version of Calico that includes this fix isn't available on (stable) GKE yet. So we want to apply a patch to increase the timeout to match the value set by newer Calico versions.
Also, whilst I understand Terraform's desire to keep the implementation simple, conceptually matching kubectl will probably be simpler for most users to understand, and there'd be no need for a dozen or so specific resources.
Here's a use case: I'm using an EKS cluster created by the terraform-aws-modules/terraform-aws-eks module. I have different types of self-managed node groups in my cluster: some small EC2 instances called "admins" handling system pods (CoreDNS, autoscaler, ALB controller, ...) and some large EC2 instances called "applications" that handle my business applications. I'm looking to automatically update the coredns deployment, created by EKS, so its nodeSelector targets my "admins" nodes.
I would love some kind of `kubernetes_nodeselector` resource to do this patch, instead of having to work around it with a bash command or manually importing coredns after my EKS cluster's creation.
This is an issue for us as well since we frequently do work in AWS EKS where other users need to be added to aws-auth configmap, but this is not currently possible without external dependencies (kubectl).
On top of this, since release 18 of terraform-aws-modules/terraform-aws-eks, `aws-auth` isn't managed by the module anymore; most of the workarounds are based on exec/kubectl, which is not something everyone can do.
Well, parameters to the module like `manage_aws_auth_configmap` suggest otherwise...
Thanks @cmanfre4 for the tip. I repurposed your solution to replace my default EKS gp2 storage class (which is unencrypted by default). I also added a variable `cluster_bootstrap` so that the job only needs to run the first time, while the replacement `gp2` storage class is still managed by Terraform:

```sh
terraform apply -var='cluster_bootstrap=true'
```
```hcl
resource "kubernetes_service_account" "replace_storage_class_gp2" {
  metadata {
    name      = "replace-storage-class-gp2"
    namespace = "kube-system"
  }
}

resource "kubernetes_cluster_role" "replace_storage_class_gp2" {
  metadata {
    name = "replace-storage-class-gp2"
  }
  rule {
    api_groups     = ["storage.k8s.io"]
    resources      = ["storageclasses"]
    resource_names = ["gp2"]
    verbs          = ["get", "delete"]
  }
}

resource "kubernetes_cluster_role_binding" "replace_storage_class_gp2" {
  metadata {
    name = "replace-storage-class-gp2"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.replace_storage_class_gp2.metadata[0].name
  }
  subject {
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.replace_storage_class_gp2.metadata[0].name
    namespace = "kube-system"
  }
}

resource "kubernetes_job" "replace_storage_class_gp2" {
  count = var.cluster_bootstrap ? 1 : 0
  depends_on = [
    kubernetes_cluster_role_binding.replace_storage_class_gp2
  ]
  metadata {
    name      = "replace-storage-class-gp2"
    namespace = "kube-system"
  }
  spec {
    template {
      metadata {}
      spec {
        service_account_name = kubernetes_service_account.replace_storage_class_gp2.metadata[0].name
        container {
          name    = "replace-storage-class-gp2"
          image   = "bitnami/kubectl:latest"
          command = ["/bin/sh", "-c", "kubectl delete storageclass gp2"]
        }
        restart_policy = "Never"
      }
    }
  }
  wait_for_completion = true
  timeouts {
    create = "5m"
  }
}

resource "kubernetes_storage_class" "gp2" {
  metadata {
    name = "gp2"
  }
  storage_provisioner = "kubernetes.io/aws-ebs"
  reclaim_policy      = "Delete"
  parameters = {
    encrypted = "true"
    fsType    = "ext4"
    type      = "gp2"
  }
  depends_on = [
    kubernetes_job.replace_storage_class_gp2
  ]
}
```
@bryantbiggs @michelzanini check out version 2.15.0, released two days ago: it now contains the `kubernetes_env` resource, which can be used exactly for your use case. I've successfully tested this updating the AWS CNI plugin (a DaemonSet).
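For illustration, a hedged sketch of what such a `kubernetes_env` usage might look like for the AWS CNI DaemonSet (the container name and environment variable here are assumptions for the example, not part of the comment above):

```hcl
# Sketch: manage a single env var on the EKS-provisioned aws-node
# DaemonSet without taking ownership of the whole object.
resource "kubernetes_env" "aws_node" {
  api_version = "apps/v1"
  kind        = "DaemonSet"
  metadata {
    name      = "aws-node"
    namespace = "kube-system"
  }
  container = "aws-node" # assumption: target container name

  env {
    name  = "ENABLE_PREFIX_DELEGATION" # assumption: example variable
    value = "true"
  }
}
```

Like `kubernetes_annotations`, this uses Server-Side Apply, so only the listed variables are claimed by Terraform's field manager.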
Hi. I’d like to patch imagePullSecrets into a ServiceAccount.
Hi, I would like to patch a meshconfig and add ingressGateway.
Would very much appreciate the ability to patch. When bootstrapping Red Hat OpenShift clusters, there are a large number of Day 1 configuration elements – authn/authz, storage, registry configs, etc. – where the workflow revolves around patching existing cluster resources into the state required.
Hello @ArieLevs, how can I use `kubernetes_env` to do something like this?
```sh
kubectl patch deployment coredns \
  -n kube-system \
  --type json \
  -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
```
I looked at the docs but couldn't get it working. Would you be able to kindly provide an example?
@alicancakil your specific example can be solved by the add-on's optional configuration. See https://aws.amazon.com/blogs/containers/amazon-eks-add-ons-advanced-configuration/. https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_addon already supports passing configuration values to the add-on.
@alicancakil the `kubernetes_env` resource will only add environment values to a supported API resource.
I'm not sure the provider supports annotation removal (classic patch) yet, but maybe you can use the `kubernetes_annotations` resource; something like the following may help you:
```hcl
resource "kubernetes_annotations" "coredns" {
  api_version = "apps/v1"
  kind        = "Deployment"
  metadata {
    name      = "coredns"
    namespace = "kube-system"
  }
  # These annotations will be applied to the Pods created by the Deployment
  template_annotations = {
    "eks.amazonaws.com/compute-type" = ""
  }
  force = true
}
```
This will remove the value of `eks.amazonaws.com/compute-type`.
To create a fully serverless EKS Fargate-based cluster, you only need to use the add-on configuration like @z0rc mentioned. Here is an example: https://github.com/clowdhaus/eks-reference-architecture/blob/f37390db1b38d154979cc1aeb4d72ab53929e847/serverless/eks.tf#L13-L15
We have another use case.
When setting up the GKE identity service for enabling OIDC authentication on the k8s API, the setup instructions require you to edit a pre-existing `ClientConfig` resource (a CRD provided by GKE) and fill in a bunch of fields. There does not appear to be any way to configure this using Terraform other than the `null_resource` hack.
We recently merged and released the ability to patch `initContainers` with #2067. Let us know if there are any issues when using the new patch attribute.
Another use case would be the ability to patch an existing resource created by an operator. For example, if I deploy Rancher-managed Prometheus and then want to change the configuration of the `Prometheus` resource.
To add another use case to the list: Patch a priority class name on deployments/statefulsets
Another use case, edit configurations of EKS provided kube-proxy via patching configmap (not via eks addon)
Yet another use case: I'd like to be able to patch the `default` storage class on an AKS cluster to add tags to the created volumes. That would require adding:

```yaml
parameters:
  tags: some-tag=some-value
```

Patching would be preferable to creating new storage classes, as there are already 7 by default.
I created a resource to patch daemonset in my provider: https://registry.terraform.io/providers/littlejo/cilium/latest/docs/resources/kubeproxy_free
If you bump into this issue looking to fix CoreDNS, you can now configure the add-on to use Fargate using `computeType`.
```hcl
resource "aws_eks_addon" "coredns_amazon_eks_addon" {
  cluster_name                = var.cluster_name
  addon_name                  = "coredns"
  addon_version               = "v1.11.3-eksbuild.1"
  resolve_conflicts_on_create = "OVERWRITE"
  resolve_conflicts_on_update = "PRESERVE"
  configuration_values = jsonencode({
    computeType = "Fargate"
    # @see https://docs.aws.amazon.com/eks/latest/userguide/coredns-autoscaling.html
    autoScaling = {
      enabled     = true,
      minReplicas = var.coredns_min_replicas,
      maxReplicas = var.coredns_max_replicas,
    }
  })
}
```
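`configuration_values` must be a JSON document that validates against the add-on's configuration schema, and `jsonencode` produces exactly that from the HCL object. For illustration, the equivalent payload built in Python (the replica counts stand in for the variables above):

```python
import json

# Equivalent of the jsonencode(...) argument above; the add-on expects
# a JSON document matching its configuration schema.
config = {
    "computeType": "Fargate",
    "autoScaling": {"enabled": True, "minReplicas": 2, "maxReplicas": 10},
}
payload = json.dumps(config)
print(payload)
```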
Terraform Version
Terraform v0.12.18
Affected Resource(s)
n/a (request for new resource)
In AWS EKS, clusters come "pre-configured" with several things running in the `kube-system` namespace. We need to patch those pre-configured things, while retaining any "upstream" changes which happen to be made (for example: set HTTP_PROXY variables). kubectl provides the `patch` keyword to handle this use-case. The Kubernetes provider for Terraform should do the same.
Proposed example (this would add the `proxy-environment-variables` ConfigMap to the existing `envFrom` list, which already contains `aws-node-environment-variable-additions`, for the container named `aws-node`):