hashicorp / terraform-provider-kubernetes

Terraform Kubernetes provider
https://www.terraform.io/docs/providers/kubernetes/
Mozilla Public License 2.0

Cannot apply CRD and a CR using it in the same plan/apply due to SSA #1367

Open heschlie opened 3 years ago

heschlie commented 3 years ago

Terraform version, Kubernetes provider version and Kubernetes version

Terraform version: 0.14.11
Kubernetes Provider version: 2.4.1
Kubernetes version: EKS 1.17

Terraform configuration

There is a bit going on here, but essentially this is the output from the terraform_flux_provider, and through some HCL abuse I'm massaging it into the right format.

resource "kubernetes_manifest" "install" {
  for_each   = { for manifest in local.install_manifest : join("-", [manifest.kind, manifest.metadata.name]) => manifest }
  depends_on = [kubernetes_namespace.flux_system]
  manifest   = each.value
}

resource "kubernetes_manifest" "sync" {
  for_each   = { for manifest in local.sync_manifest : join("-", [manifest.kind, manifest.metadata.name]) => manifest }
  depends_on = [kubernetes_manifest.install]
  manifest   = each.value
}

Question

Essentially I am using the kubernetes_manifest resource, and am trying to:

  1. Deploy some custom resource definitions
  2. Deploy some custom resources using the above definitions

Upon doing this I am greeted with an error during the plan because the CRDs have not been created and SSA is not happy about it:

Acquiring state lock. This may take a few moments...

Error: Failed to determine GroupVersionResource for manifest

  on main.tf line 49, in resource "kubernetes_manifest" "sync":
  49: resource "kubernetes_manifest" "sync" {

no matches for kind "Kustomization" in group "kustomize.toolkit.fluxcd.io"

Error: Failed to determine GroupVersionResource for manifest

  on main.tf line 49, in resource "kubernetes_manifest" "sync":
  49: resource "kubernetes_manifest" "sync" {

no matches for kind "GitRepository" in group "source.toolkit.fluxcd.io"

Releasing state lock. This may take a few moments...
ERRO[0105] Hit multiple errors:
Hit multiple errors:
exit status 1

Is there a way to tell the provider that things are OK and have it skip this check during planning? This seems like a bug, or at least a required feature before this resource comes out of experimental; asking people to first apply the CRDs and then add and apply the CRs doesn't seem like a valid long-term solution.

kyschouv commented 3 years ago

I came to ask the same thing, as I ran into this trying to migrate a bunch of things from Helm charts to straight Terraform. There are a lot of similar issues on the old repo:

https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/247 https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/218 https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/235

This is going to make adoption of kubernetes_manifest very spotty. It'll basically require a separate Terraform step to plan and apply any CRDs ahead of using those CRDs. This will be further complicated by using something like Operator Lifecycle Manager, where CRDs are installed as part of the operator, and several operators might have dependent CRDs. For example, my current dependency chain looks like this:

cert-manager -> azure service operator -> grafana (uses a mysql database/user deployed through azure service operator).

cert-manager is installed through OLM and creates CRDs which must be used to generate resources ahead of installing azure service operator, which then creates CRDs that are used to create resources ahead of installing grafana. My plan was to utilize depends_on to handle the dependency chain, but because of the issue above (and in the linked old issues), that's not working. It would require separating each of those into a separate Terraform plan/apply step and then performing them in order, which will become cumbersome fast.

My current workaround is to place CRD usage into a local Helm chart and use that through helm_release. It's very hacky, but Terraform doesn't seem to evaluate those during the plan stage (at least not in a breaking way if the CRDs aren't installed yet), so it's a usable workaround. It would be a lot cleaner to be able to just use kubernetes_manifest though.
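A rough sketch of what that local-chart workaround can look like (the chart path, release name, and the dependency on a cert-manager release are placeholders, not taken from the comment):

resource "helm_release" "crd_consumers" {
  name  = "crd-consumers"                       # hypothetical release name
  chart = "${path.module}/charts/crd-consumers" # local chart whose templates/ contain only the CRs

  # Whatever installs the CRDs (OLM, an operator chart, etc.) must be applied first.
  depends_on = [helm_release.cert_manager]
}

Because Helm renders the chart at apply time, the CR templates are never inspected against the cluster during Terraform's plan.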

nbraun-wolf commented 3 years ago

So as far as I understand, it wouldn't even work to separate those 20k lines of YAML from cert-manager into individual files so they could be used as manifests, because the moment we run apply, the provider does this server-side check and reports that the CRD does not exist. If we at least had an exclude flag, applying twice wouldn't be so bad; without one, we have to -target only the CRD on the first apply and then everything gets planned again in the second apply.

I am using a Makefile now. It's not that nice, but at least I can spin everything up with one command.

alxndr13 commented 2 years ago

For anyone having the same issue:

I wanted to install cert-manager using Helm and deploy a ClusterIssuer manifest in the same terraform apply step. The ClusterIssuer obviously depends on the cert-manager.io/v1 apiVersion, therefore the plan failed due to SSA.

I'm now using the kubectl provider for the ClusterIssuer resource; combined with a depends_on meta-argument on my cert-manager Helm release, the plan and apply went through :)
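For reference, a minimal sketch of that setup, assuming the gavinbunney/kubectl provider is configured and the ClusterIssuer YAML lives next to the module (names and paths are illustrative):

resource "helm_release" "cert_manager" {
  name             = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  namespace        = "cert-manager"
  create_namespace = true

  set {
    name  = "installCRDs"
    value = "true"
  }
}

resource "kubectl_manifest" "cluster_issuer" {
  # kubectl_manifest does not need the CRD schema at plan time,
  # so the not-yet-installed CRD does not break the plan.
  yaml_body  = file("${path.module}/cluster-issuer.yaml")
  depends_on = [helm_release.cert_manager]
}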

dhirschfeld commented 2 years ago

I had hoped that the manifest resource being promoted out of alpha meant this problem had been resolved. Unfortunately, I've just hit it again with argocd. It would be great if someone from HashiCorp could comment on any plans to resolve this long-standing issue.

It might be a slightly different reason than the original https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/72 but the end result is the same - you can't use the manifest resource to deploy anything which depends on a CRD, which greatly limits its usefulness.

redwyre commented 2 years ago

This is a deep issue, because any time you want to reference a CRD, you need to split your terraform project into something that can be applied separately. E.g. using Traefik with CRDs means I need one plan to install Traefik first, then another to set up all my services.

I don't know what the solution would be here, I'd guess either terraform needs the ability to run a multi-stage plan, or have much better support for splitting a project into pieces.

zorgzerg commented 2 years ago

For anyone having the same issue:

I wanted to install cert-manager using Helm and deploy a ClusterIssuer manifest in the same terraform apply step. The ClusterIssuer obviously depends on the cert-manager.io/v1 apiVersion, therefore the plan failed due to SSA.

I'm now using the kubectl provider for the ClusterIssuer resource; combined with a depends_on meta-argument on my cert-manager Helm release, the plan and apply went through :)

I have the same problem when deploying cert-manager: two resources, the Helm release and the ClusterIssuer, and WTF???

╷
│ Error: Failed to determine GroupVersionResource for manifest
│ 
│   with kubernetes_manifest.cluster_issuer,
│   on main.tf line 50, in resource "kubernetes_manifest" "cluster_issuer":
│   50: resource "kubernetes_manifest" "cluster_issuer" {
│ 
│ cannot select exact GV from REST mapper
╵
ERRO[0011] 1 error occurred:
        * exit status 1

We really need a fix!

jufa2401 commented 2 years ago

@alexsomesan sorry for pinging, but have you seen this issue?

EugenMayer commented 2 years ago

This issue alone renders the manifest resource entirely useless IMHO. There is no good way - for now - to apply CRDs in an isolated way, so one could e.g. deploy the CRDs of all the helm charts like traefik, chunkydata, minio-operator ... and a lot more (since CRDs are now used everywhere).

One has to pull all the CRDs out of the projects' helm/kustomize charts manually into some kind of CRD-only resources (a cumbersome and fragile task, especially maintaining it), apply those in an extra, isolated Terraform run, and only then start using the manifest resource to create CRs.

It really reminds me of the 'typescript typings' issue back in the day. Now we will probably see all the projects split out their CRDs into custom, isolated helm charts/kustomize repositories. Then (maybe) a common standard will follow so that CRDs can be 'looked up' and installed by kubectl automagically... and until then the Terraform kubernetes manifest resource will rely on this two-step process of installing CRDs first.

This is by no means the fault of terraform / kubernetes manifest here; it is what happens when you introduce strong typings late into the game (looks at Kubernetes). ConfigMaps (no typing) were the way to go, then we got CRDs, and now we have the issue of 'typings first, then declaration of instances'.

The same issue/mistake happened with TypeScript typings - or maybe it's just the nature of how these huge things evolve: step by step, with a lot of painful intermediate steps in between.

That said, considering the long road until those CRDs can be installed in an isolated way, this resource is basically a playground IMHO. I'm not sure it makes sense to maintain this huge (and nice) little thing until then - but of course that is not up to me to judge.

We would love to use manifests so much! But I guess we need to wait for the helm/kustomize/kubernetes projects to make up their minds about CRDs and 'typings' first.

alekc commented 2 years ago

One possible solution, in the case of argocd (for those who end up here through Google), is to use Terraform's -target option.

This is how I am doing it atm, with a 4-stage EKS cluster bootstrap:

RuntimeArgoCDPlan:
  stage: runtime_argocd_plan
  extends:
    - .runtimeJob
  #  needs:
  #    - ClusterApply
  script:
    - gitlab-terraform init
    - gitlab-terraform validate
    - gitlab-terraform plan -target=module.runtime.helm_release.argocd
    - gitlab-terraform plan-json -target=module.runtime.helm_release.argocd
  artifacts:
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json

RuntimeArgoCDApply:
  stage: runtime_argocd_apply
  extends:
    - .runtimeJob
  needs:
    - RuntimeArgoCDPlan
  script:
    - gitlab-terraform init
    - gitlab-terraform apply -target=module.runtime.helm_release.argocd
  when: manual

RuntimePlan:
  stage: runtime_plan
  extends:
    - .runtimeJob
  needs:
    - RuntimeArgoCDApply
  script:
    - gitlab-terraform init
    - gitlab-terraform validate
    - gitlab-terraform plan
    - gitlab-terraform plan-json
  artifacts:
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json

RuntimeApply:
  stage: runtime_apply
  extends:
    - .runtimeJob
  needs:
    - RuntimePlan
  script:
    - gitlab-terraform init
    - gitlab-terraform apply
  when: manual

Basically, on the first run you install only the argocd helm chart, and then everything else is deployed through Argo manifests (which tolerate unknown CRDs if they are in the helm chart).

Since this solution uses -target, it's not necessary to split the project and state.
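Outside of GitLab CI, the same two-pass flow boils down to these commands (the resource address is the one used in the pipeline above):

terraform init
# First pass: apply only the release that installs ArgoCD (and its CRDs).
terraform apply -target=module.runtime.helm_release.argocd
# Second pass: everything else, now that the CRDs exist in the cluster.
terraform apply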

rlees85 commented 2 years ago

Just going to add that the Kustomization provider doesn't have this problem either: https://registry.terraform.io/providers/kbst/kustomization/latest/docs

You can deploy the CRDs using Helm or whatever and then the CRs themselves by using a Kustomization that depends on the Helm release.

Would also like to be able to do this directly with a kubernetes_manifest. Would be so much cleaner.
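A rough sketch of that pattern with the kbst/kustomization provider, assuming the CRs live in a local kustomization directory (the path and the Traefik release name are placeholders):

data "kustomization_build" "custom_resources" {
  path = "${path.module}/manifests/custom-resources"
}

resource "kustomization_resource" "custom_resources" {
  for_each = data.kustomization_build.custom_resources.ids
  manifest = data.kustomization_build.custom_resources.manifests[each.value]

  # The Helm release that installs the CRDs must be applied first.
  depends_on = [helm_release.traefik]
}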

ArchiFleKs commented 2 years ago

The kubectl provider also seems to work for this use case. I reverted to it here instead of kubernetes_manifest, as kubernetes_manifest was not able to deploy because the CRDs are being created by the dependencies.

DaleyKD commented 2 years ago

I can't believe I'm going to have to add a totally separate provider just to do manifests.

primeos-work commented 1 year ago

Ok, so this is currently the 3rd most upvoted issue of this provider: https://github.com/hashicorp/terraform-provider-kubernetes/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc

The main perceived problem for this issue seems to be Terraform's "limitation" that everything needs to be planned in advance. However, technically, this doesn't seem to be a blocker for this issue. This should be proven by the fact that other providers do reportedly already enable this common use case (e.g., kbst/kustomization and gavinbunney/kubectl).

The actual limitation here seems to be that this provider wants to validate CRs during the planning phase (i.e., a design choice by this provider). This validation is obviously only possible if the CRD already exists. Therefore, I propose the following solution: we add a validate (better names are welcome) argument to the kubernetes_manifest resource. This argument would default to true, in which case the behavior remains exactly as it is today. However, if the user chooses to set validate = false, the provider would first check whether the resource already exists and, if it doesn't, the planning stage would simply conclude that the resource needs to be created and not do any further validation. In other words, the user is responsible for the correctness of the provided data and for specifying the correct dependencies so that the resource always gets created after the CRD - this should be documented; otherwise it will simply fail during the apply phase. This should also be the only behavior that needs to be altered: if the CR already exists, then the CRD also already has to exist.

There might be some additional things to consider when implementing this (I'm not yet familiar with the code, unfortunately), but I don't think it should make this provider worse or cause additional limitations - especially since the current design already has quite a few problems/limitations of its own: destroying CRDs will also destroy the associated CRs (which this provider cannot track/handle properly), the CRD could be destroyed/altered between the plan and apply phases, the provider fails if the CR already exists in Kubernetes but not in Terraform's state file, dependency tracking between CRDs and CRs is left to the user, and there are likely even more issues/limitations.
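A sketch of how the proposed argument might look in configuration (validate is purely hypothetical here and does not exist in the provider; the cert-manager names are placeholders):

resource "kubernetes_manifest" "cluster_issuer" {
  # Hypothetical argument from the proposal above; not an existing provider feature.
  validate = false

  manifest = yamldecode(file("${path.module}/cluster-issuer.yaml"))

  # With validation off, the user takes responsibility for ordering the CR after its CRD.
  depends_on = [helm_release.cert_manager]
}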

What do you think of this proposal/idea? Does this seem like a good idea or did I miss anything relevant (like new issues this validation bypass could create, technical limitations, etc.)? It would be great if we could find a way to finally resolve this issue/limitation as this seems to be quite a major limitation of this provider and something that should be avoidable. Feedback is welcome :)

alexsomesan commented 1 year ago

Hi @primeos-work!

Thanks for taking the time to analyse this problem and make such detailed suggestions! In order to move this conversation further in a constructive way, please let me make some corrections to your assumptions about how the provider is implemented and why.

The reason why the provider needs access to the CRD during planning is not for validation purposes (in fact, validation is still a missing feature in the manifest resource), but rather because the provider needs to establish the structure (aka schema) of the resource so that Terraform can correctly persist the state for it. Terraform's state storage and the provider protocol are strongly typed. The entire structure of a resource, including the types of all its attributes, needs to be known to Terraform from the creation of the resource and Terraform will assume it stays constant once it's been defined.

The only way we can fully describe a CR resource to Terraform at creation time is by looking up its schema details in its parent CRD. Unless we do this, all the structural information we would have about the CR is the set of attributes the user included in the configuration. This is not a full description of the resource, but rather a subset of it. The minute the user then updates the resource adding a new, previously unspecified attribute value, Terraform will fail and report inconsistent state structure. This will lead to corrupt state storage and will make any update operations on the CR resource impossible.

Because of the above reasons, we cannot avoid requesting the CRD structure during planning.

primeos-work commented 1 year ago

Hey @alexsomesan, thanks a lot for your fast and detailed response with the corrections! My bad, I already feared it couldn't be quite that simple, but let's try to find a new way then.

because the provider needs to establish the structure (aka schema) of the resource so that Terraform can correctly persist the state for it

Does this step necessarily have to happen during the planning stage, or would it, e.g., be possible to use only the set of attributes the user included in the configuration for the plan if the CRD lookup fails, and then perform the CRD lookup during the apply stage (when creating the CR)? That way, the final state would be correct and only the plan would change. For correctness, the apply could even fail if the CR already exists during the apply stage because it was manually/externally created since the planning (IIRC this is even already the current behavior). So from a theoretical standpoint this approach should mostly keep the correctness of applying the plan (the only issue I see is if the CRD changes between the plan and apply stages - I'll try to test the current behavior later - this might be acceptable though because the CRD is versioned and there shouldn't be any breaking changes anyway). Would this be something that could be implemented, or are there additional technical challenges/restrictions (e.g., the mentioned provider protocol - I'm not sure if the creation you mentioned refers to the creation of the full description/specification/structure during the planning or the creation of the actual object during the application of the plan)?

alexsomesan commented 1 year ago

@primeos-work, unfortunately it's still not that simple (as you realised yourself).

Let's first consider what is the point of having a "plan" step in Terraform. It is to present the user with an accurate preview of exactly what is going to be performed during "apply". This is to give the user a chance to vet the proposed changes and confirm or abort before anything is touched. Once a "plan" is confirmed by the user, "apply" will only perform exactly what was "promised" during the plan, not more not less. That means we establish a trust contract between Terraform and the user. This is a major differentiator of Terraform against other tools that might at first glance seem "simpler". I can appreciate that maybe this value isn't immediately recognised by new Terraform users, but that is a whole other conversation.

Because the plan is a "promise" that should not be broken during apply, Terraform enforces consistency checks on values and types between the plan and the apply result. One of these checks is for schema types of resources and attributes to match between plan and apply. Missing or extra attributes between plan and apply will fail this check and in turn the whole operation fails. This is why there is no other option but to fully construct and "promise" the type of the resource during planning. Doing it later will be rejected by Terraform's consistency checks.

Terraform isn't trying to be unreasonably pedantic here. There is a very solid point about keeping types consistent like this. Consider some resource depending on a kubernetes_manifest resource. You might want to interpolate the value of one of the manifest attributes into that other resource's configuration. During planning, Terraform will decide on the ordering of operations on these resources, as well as the values displayed for attributes of the dependent resource, based on the availability of a value for the upstream manifest attributes. But if it doesn't know if there even is an attribute there, how can it make any promises about the structure of the dependent resource? Without type checking, it might expect a string value and instead get a set of booleans at apply time. This would render the whole promise of the plan useless, and thus defeat the whole point (and effort) of having a planning mechanism in Terraform.
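To illustrate the interpolation scenario being described, a small sketch (the resource names and the spec field are made up for the example, not taken from the comment):

resource "kubernetes_manifest" "git_repository" {
  manifest = yamldecode(file("${path.module}/gitrepository.yaml"))
}

resource "kubernetes_config_map" "flux_info" {
  metadata {
    name = "flux-source-info"
  }
  data = {
    # Terraform can only promise this value (and its type) during planning
    # if it knows the full schema of the manifest resource, which comes from the CRD.
    repo_url = kubernetes_manifest.git_repository.object.spec.url
  }
}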

In conclusion, if the planning phase is of little value to one's use-case then there are alternative providers that decided to opt out of offering these guarantees (a few mentioned above in this thread). These providers just treat the resources as blocks of unstructured text (from Terraform's POV) that they just throw at the API. While they do provide a seemingly simpler first-use experience, they will soon fall short when it comes to combining the resources in more complex configurations.

There is no point in us creating yet another provider with the same limitations. Instead, this provider tries to offer as much of Terraform's value propositions as possible and that includes properly following the plan-apply workflow. The solution for the problem we are discussing here is best solved in Terraform itself, in order to preserve all the guarantees Terraform is designed to offer. We are having ongoing conversations with the Terraform team about approaches to it and they are experimenting with various solutions. Once they settle on an approach, we will be discussing it with the wider audience.

primeos-work commented 1 year ago

Thanks again for all of your insights @alexsomesan!

If I may share my thoughts (from a user's perspective) on Terraform's planning and state features / design decisions (a bit out of scope here but I feel like it's relevant - then again it shouldn't be something new): I do see the value of Terraform’s planning and it does indeed provide many great advantages (that I like/appreciate as a user). However, it obviously also comes with many drawbacks that can be quite a PITA (if I may say so), especially when first working with Terraform. But as you already mentioned it can be a huge benefit in the long run. Overall, I’m quite split on this design decision. I do find it nice/useful but its strictness also often seems to limit practicability (or at least comfort / ease of use; in addition, it would be nice if diverged state could be imported semi-automatically / detected better, etc.). Terraform’s strict and complete state tracking especially seems a bit out of place or at least redundant in cases like K8s where the complete, declarative state is already available (sure, it still offers advantages / additional features but I'd say in such cases it provides fewer advantages and more drawbacks than in other cases). This will also result in multiple issues/annoyances if the K8s cluster’s state diverges from what Terraform thinks it still is but this divergence should obviously be avoided by the user in the first place. And even with all of the nice planning features you can still do things like using the kubernetes_manifest resource to change a CRD in a breaking way (or even deleting it) which can cause lots of issues for CRs (other kubernetes_manifest resources) that depend on this CRD (e.g., causing Terraform to fail later in the apply operation or in the next plan/apply operation without any additional changes). This can rightfully be considered misuse but in a perfect world without misuse Terraform could be much simpler (e.g., I love typing but it's kinda only there to prevent misuse).

-> Anyway, what I’m trying to say is that it might be nice if Terraform could be a bit less strict in some places (in the sense of “perfect is the enemy of good”) but that’s obviously also a very dangerous thing to do and needs to be considered very carefully (to avoid making things worse / opening Pandora's box).

Regarding the motivation of the current behavior/design (dependencies, variable interpolation, promises, etc.):

But if it doesn't know if there even is an attribute there, how can it make any promises about the structure of the dependent resource?

In that case it shouldn’t (and I'd say that's fine). IMO the normal behavior of kubernetes_manifest should remain unchanged and there should either be an additional argument (like the previously proposed validate argument that needs a better name) or another resource for this special “mode”/hack where the CRD cannot be looked up. In that case the resource would either not output anything (which would be the easiest and should be fine for users) or it would only output the attributes specified in the Terraform configuration (which I wouldn’t suggest as it comes with a lot of challenges (having to infer the types and trying to validate them once the CRD is available) and one should be able to use Terraform variables, data sources, etc. instead). This would trade one feature for another but I think it should be fine for most if not all use cases here (the main use case that I see would be to depend on a default value from the CRD that isn’t part of the Terraform configuration but in practice there should always be pretty acceptable workarounds like making that value part of the Terraform configuration and setting that attribute explicitly).

This would render the whole promise of the plan use-less and thus the whole point (and effort) of having a planning mechanism in Terraform.

So with an implementation as described above this problem should be avoided (the plan wouldn't be complete but this is already the case as Terraform cannot know everything in advance anyway) and only some features of kubernetes_manifest would be limited. Overall, I’d still consider it a significant improvement as it should be better than the current alternatives (using unverified 3rd-party providers with less elegant approaches, using the -target option, splitting the Terraform configuration up into multiple root modules (so that the CRDs are created before the CRs), or using even worse hacks (I've seen some that likely shouldn't even be shared)).

In conclusion, if the planning phase is of little value to one's use-case then there are alternative providers that decided to opt out of offering these guarantees (a few mentioned above in this thread). These providers just treat the resources as blocks of unstructured text (from Terraform's POV) that they just throw at the API. While they do provide a seemingly simpler first-use experience, they will soon fall short when it comes to combining the resources in more complex configurations.

I agree with all of that. Tbh it’d be nice though to have a more official one (in terms of reviews/verifications, maintenance / community support, etc.) – but that is of course out-of-scope for this issue (and a challenge in general).

The solution for the problem we are discussing here is best solved in Terraform itself, in order to preserve all the guarantees Terraform is designed to offer. We are having ongoing conversations with the Terraform team about approaches to it and they are experimenting with various solutions. Once they settle on an approach, we will be discussing it with the wider audience.

That sounds very interesting! Thanks a lot for looking into this :) I’m excited to learn more about this so, if possible, please do share a link here once that discussion is public (unless it'll be discussed in this issue anyway).

I hope my thoughts/suggestions here (from a users’ perspective) do provide some value and that some parts of it might be considered. I wish I could be of more help but unfortunately my knowledge of Terraform, the code/implementation, design decisions, data structures, protocols, etc. is still by far too limited for more concrete and reasonable ideas.

ghost commented 1 year ago

In my case, I'm using the helm provider to install fluxcd and trying to make a kubernetes_manifest with a GitRepository object.

Simply adding a skip_kind_check = true would be enough to avoid this problem.
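As a sketch, that suggestion would look something like this (skip_kind_check is hypothetical and does not exist in the provider; the fluxcd release name is a placeholder):

resource "kubernetes_manifest" "git_repository" {
  # Hypothetical flag; not an existing kubernetes_manifest argument.
  skip_kind_check = true

  manifest   = yamldecode(file("${path.module}/gitrepository.yaml"))
  depends_on = [helm_release.fluxcd]
}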

generalovmaksim commented 1 year ago

@protosam this option skip_kind_check don't not exist in kubernetes_manifest, it's in kubectl_manifest https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs/resources/kubectl_manifest#validate_schema

tkeller-moxe commented 1 year ago

@generalovmaksim The hope would be to add one to kubernetes_manifest, since there currently isn't one. This has bitten me quite a few times as well.

owen26 commented 1 year ago

What Terraform does by checking the kind type during the planning stage still makes sense to me. However, I'm also a bit bothered by having to split my Terraform workspace in two so that cert-manager is provisioned before all the issuer resources that depend on it.

Ideally, it would be great if the CRD definition and the resources implementing it could sit inside the same Terraform workspace in harmony, without sacrificing the type-safety check.

ghost commented 1 year ago

@generalovmaksim my comment is a suggestion to add an input of skip_kind_check to kubernetes_manifest.

kubernetes_manifest validates group, kind, and version without a way to override the behavior. This is a poor assumption about how to interact with Kubernetes clusters out of the box, because Kubernetes is designed to be extended with custom resources.

Ideally, it would be great if the CRD definition and CRD implementation resources can sit inside the same Terraform workspace in harmony without sacrificing type safety check.

Part of the problem here is that Terraform was built on a premise of predictability: calculate everything beforehand and ensure it can be applied reliably.

Kubernetes is fundamentally different from Terraform. The idea of describing a state and eventually reconciling that state flies directly against the fundamental assumptions that Terraform wants to uphold. Reconciling a state is not a linear A->B calculation, and it's going to be different depending on the controller for the resource in question.

If Terraform is to effectively have interoperability with Kubernetes, the provider must provide mechanisms to break predictability.

dimisjim commented 1 year ago

break predictability.

But that's the whole point of Terraform: what you see in the plan is what you get in the final apply, and when you want to verify that this is still the case, you plan again and expect no diff.

If there is a diff, it means that something changed in the remote resource state, and needs to either be aligned in code, or be planned and applied again.

If there is an unpredictable diff, then one can specify a lifecycle rule to ignore certain attributes. So if some cluster state shouldn't be controlled by terraform, but it should be reconciled only via its controller manager, then there should be a mechanism to provide this capability.
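For reference, a minimal sketch of the existing lifecycle mechanism being referred to, shown on an ordinary provider resource (the attribute choice is just an example):

resource "kubernetes_config_map" "example" {
  metadata {
    name = "example"
  }
  data = {
    replicas_hint = "1"
  }

  lifecycle {
    # Let an in-cluster controller own this attribute; Terraform will not
    # plan changes for it even if the live value drifts.
    ignore_changes = [data]
  }
}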

ghost commented 1 year ago

Feel like you missed the part where Kubernetes is fundamentally different. If you plan to use kubernetes for what it was built for, this provider's going to suck.

The helm provider is a bit better. You can have it kickstart your in-cluster resources.

These are competing ideas.

marcinkubica commented 1 year ago

Quite surprisingly, this issue is labelled "question" only. Makes the provider half-usable 👎

jeremybusk commented 1 year ago

This is really problematic. Vote it up to get it fixed. I'm using kubernetes_manifest and trying to create a module. I'm tempted to just use kubectl diff and small bash functions in the pipeline, but would prefer to use Terraform for the existing platform.

jeremybusk commented 1 year ago

Here is a quick code example, in case it's useful to anyone, to throw into a main.sh that uses the helm-diff plugin and runs before your Terraform code.

You could also install all your Helm repos and releases in a different Terraform folder using a different backend, and then cd into the Terraform folder that creates the external-secrets resources based on the CRDs created by helm upgrade/install.

#!/bin/bash
set -e

# Add the external-secrets repo and the helm-diff plugin if they are missing.
[[ $(helm repo list | grep ^external-secrets) ]] \
  || helm repo add external-secrets https://charts.external-secrets.io
[[ $(helm plugin list | grep ^diff) ]] \
  || helm plugin install https://github.com/databus23/helm-diff
helm diff version

# Create the namespace idempotently.
kubectl create namespace external-secrets --dry-run=client -o yaml | kubectl apply -f -

# Runs either "helm diff upgrade ..." or "helm upgrade ..." depending on the first argument.
helm_cmd(){
  helm $1 upgrade --install external-secrets \
    external-secrets/external-secrets \
    -n external-secrets \
    --set installCRDs=true
}

# Show the pending diff and only run the real upgrade if there are changes.
lines=$(helm_cmd diff)
if [[ $(echo -n "$lines" | wc -l) -gt 0 ]]; then
  echo "$lines" | grep "^+\|^-" || true
  echo "Helm changes detected."
  echo "Running helm upgrade in 5."; sleep 5
  helm_cmd
fi

sandijs-private commented 1 year ago

How about introducing delays and long-running async processes into the apply process ("Terraform 2.0")?

Example 1: a Kubernetes operator creates infrastructure on which the next branch of infrastructure depends; a refresh + apply could then happen automatically.

Example 2: a dedicated-server hosting provider adds 20 servers over a month; they could then automatically get their planned infrastructure. In the big clouds you get that immediately; in hosting centers, only after purchase and installation have happened.

Terraform 2.0 would be NoSQL based, with a server, with rollbacks, backups, automated tests...

EBCEEB commented 1 year ago

I've found a solution that uses only the kubernetes_manifest resource, but it has to be run twice:

resource "helm_release" "cert_manager" {
  name       = "cert-manager"
  chart      = "cert-manager"
  repository = "https://charts.jetstack.io"

  namespace        = "infra"
  ...
}

data "kubernetes_resources" "cert_manager" {
  api_version    = "apps/v1"
  kind           = "Deployment"
  namespace      = "infra"
  field_selector = "metadata.name=cert-manager"
}

resource "kubernetes_manifest" "clusterissuer_selfsigned" {
  count = length(data.kubernetes_resources.cert_manager.objects) > 0 ? 1 : 0
  ...

  depends_on = [helm_release.cert_manager]
}

moxli commented 1 year ago

@EBCEEB Thanks for trying to find a workaround!

Did you manage to get around this?

When trying it with the same setup (cert-manager + clusterissuer manifest) I get:

│ Error: Invalid count argument
│ 
│   on cert_manager.tf line 54, in resource "kubernetes_manifest" "cert-manager_clusterissuer_letsencrypt-prd":
│   54:   count = length(data.kubernetes_resources.cert_manager.objects) > 0 ? 1 : 0
│ 
│ The "count" value depends on resource attributes that cannot be determined
│ until apply, so Terraform cannot predict how many instances will be
│ created. To work around this, use the -target argument to first apply only
│ the resources that the count depends on.

EBCEEB commented 1 year ago

@EBCEEB Thanks for trying to find a workaround!

Did you manage to get around this?

Sorry. I already had a previously deployed k8s cluster, so I didn't face that issue.

On clean infra, Terraform can't even configure the kubernetes provider:

│ cannot create discovery client: no client config

Looks like my solution is not a solution :-(

simonfelding commented 1 year ago

the solution is to write a helm chart for the CRDs

marksumm commented 1 year ago

@simonfelding Agreed. I recently migrated from the kubectl provider to local Helm charts.

Hronom commented 1 year ago

Somebody should join HashiCorp and the team responsible for Terraform and lead the changes to handle this case.

Issues like this make you literally lose trust in humanity. And this is critical: if aliens or AI ever show up, this issue could be one of the reasons they won't make any deal with humanity.

Humans, let's be more responsive. HashiCorp, what do you think? Are you able to provide a valid solution?

Hronom commented 1 year ago

To somehow save humanity, here is a workaround in the "ok" range: https://github.com/hashicorp/terraform-provider-kubernetes/issues/1380#issuecomment-962058148

Although in my case I was using a more up-to-date helm chart: https://artifacthub.io/packages/helm/main/raw

I like this one the most of all the workarounds I found in the many related issues...

marty-workpath commented 1 year ago

I ran into this issue when deploying the Grafana Agent Operator using a combination of Terraform and Helm. The issue I faced is that Helm needed to install the CRDs before I could reference them in the manifest provided by Grafana. Then I would get errors like no matches for kind "GrafanaAgent" in group "monitoring.grafana.com".

To solve this, I kept the resources that referenced the CRDs in a separate manifest and then applied the manifest using the kubectl_file_documents data source and kubernetes_manifest resource, referenced here. Make sure to add a depends_on in your kubernetes_manifest resource to ensure it runs the Helm install first.
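A rough guess at the shape of that setup (the file path, release name, and the way documents are split are illustrative, not taken from the comment):

data "kubectl_file_documents" "grafana_agent" {
  content = file("${path.module}/manifests/grafana-agent-crs.yaml")
}

resource "kubernetes_manifest" "grafana_agent" {
  for_each = { for i, doc in data.kubectl_file_documents.grafana_agent.documents : tostring(i) => doc }
  manifest = yamldecode(each.value)

  # Ensure the Helm release that installs the operator (and its CRDs) is applied first.
  depends_on = [helm_release.grafana_agent_operator]
}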

marksumm commented 1 year ago

Perhaps slightly off-topic, but I thought I would share my experiences of using the kubectl provider and give the reasons why I migrated to using local Helm charts in place of it. It's worth noting that the situation is quite different when you are using Terraform to generate YAML than when using static YAML files...

First, I had to activate the wait option on my kubectl_manifest resources because terraform destroy for those resources was returning too early. This could lead to a situation where other resources required to support webhooks involved in the destruction of the implied Kubernetes objects were removed too soon. Not the end of the world.

Next, frequent "inconsistent final plan" errors from the provider forced me to activate the force_new option on the kubectl_manifest resources. After this change, I saw many cases where terraform apply apparently succeeded but the final state in Kubernetes was that several expected resources were missing. They were recreated after running terraform apply for a second time. So, the deployment contract was essentially broken.

I then noticed that issues in the kubectl provider GitHub repo were starting to pile up and there was no update since last year. This situation is ongoing.

One disadvantage of the local Helm chart approach is that Terraform has no visibility of the Kubernetes changes implied by any Helm chart. So, it's possible that Kubernetes changes made outside of Terraform will not cause Terraform diffs.

kkopachev commented 1 year ago

@primeos-work, unfortunately it's still not that simple (as you realised yourself).

Would you consider a solution where the user supplies the CRD (or a URL to it) in an attribute, so the kubernetes_manifest resource can infer the schema from that spec instead of consulting the running cluster?

Let's say I want to provision a ClusterSecretStore manifest; that would look like:


resource "kubernetes_manifest" "cluster-secret-store" {
  # This is used by provider to fetch resource schema, avoiding getting it from the cluster
  crd_url = "https://raw.githubusercontent.com/external-secrets/external-secrets/main/config/crds/bases/external-secrets.io_secretstores.yaml"

  manifest = {
    apiVersion = "external-secrets.io/v1beta1"
    kind       = "ClusterSecretStore"
    metadata   = {
      name = "aws-secrets-manager"
    }
    spec = {
      provider = {
        aws = {
          service = "SecretsManager"
          region  = "us-east-1"
        }
      }
    }
  }
}

emaung commented 1 year ago

I ran into this issue when deploying the Grafana Agent Operator using a combination of Terraform and Helm. The issue I faced is that Helm needed to install the CRDs before I could reference them in the manifest provided by Grafana. Then I would get errors like no matches for kind "GrafanaAgent" in group "monitoring.grafana.com".

To solve this, I kept the resources that referenced the CRDs in a separate manifest and then applied the manifest using the kubectl_file_documents data source and kubernetes_manifest resource, referenced here. Make sure to add a depends_on in your kubernetes_manifest resource to ensure it runs the Helm install first.

Thank you for mentioning this. I was stuck and this solves my problem.

ashtonian commented 1 year ago

Rancher (cert-manager) and Datadog custom metrics are both impacted by this as well.

maxkokocom commented 1 year ago

If the HashiCorp team does not want to solve this, consider removing this resource from the provider, as it's not really usable for most Kubernetes use cases. CRDs are nowadays almost everywhere; they're not the esoteric, occasional use case they used to be years ago. At least add a big CAPS warning at the top of the documentation that it's not usable for most use cases and that you don't plan to make it so.

I don't mind enforcing the Terraform philosophy, but in this case it goes against the Kubernetes philosophy, so I am a bit angry I wasted time in this rabbit hole instead of going with hacky bash in a null_resource or the gavinbunney provider - I don't like either, but at least those always worked with CRDs.

framegrace commented 1 year ago

The target workaround works perfectly, thanks. I really think it is an acceptable workaround for these cases. I understand the problem, and that fixing it as things stand now would take a lot of refactoring work. But I really think the schema is not indispensable in the case of non-existing resources. The promise to "create a new object" when applying should be enough, right? The information about which values will be used at plan time is not that useful in the case of a non-existing object.

EmilyShepherd commented 1 year ago

My typical workaround is to just make a local helm chart directory with the CR I need to install inside of that - then I install it via terraform with a helm_release with the chart pointing to the local directory.

I'd much rather the kubernetes provider could deal with this in a more sane way, but this does the job in the least horrible way for me.

framctr commented 11 months ago

Same issue as #1583.

wazoo commented 9 months ago

I hit this today when trying to replace the use of the kubectl provider in this example from the fluxcd documentation. The -target workaround does work, but it introduces a manual step into the cluster vending machine workflow I am building, so it is less than ideal.

I like the look of this idea, but I would really like the option to just have the resource not validate during the plan. In this case the CRDs are created as part of the bootstrap resource in the fluxcd provider, so with a depends_on Terraform doesn't create these resources until the bootstrap is done anyway, and they succeed when it applies them.

This is an example:

resource "flux_bootstrap_git" "this" {
  path                    = "cluster/flux-system"
  disable_secret_creation = true
  version                 = var.flux_version
  secret_name             = local.flux_operator_secret_name
}
resource "kubernetes_manifest" "fleet_config_gitsource" {
  depends_on      = [flux_bootstrap_git.this]
  manifest        = yamldecode(local.fleet_config_gitsource_manifest)
}

thechubbypanda commented 8 months ago

May have been said already but another potential way of solving this would be to:

Does this make sense?

andrewhharmon commented 5 months ago

any update on this issue?

jmgilman commented 4 months ago

The kubectl provider has been abandoned and there's an existing issue where non-namespaced resources fail to get created. This prevents one of the primary workarounds to the issue documented here.

Is it possible to get an update on if this is planning on being addressed?

froblesmartin commented 4 months ago

The kubectl provider has been abandoned and there's an existing issue where non-namespaced resources fail to get created. This prevents one of the primary workarounds to the issue documented here.

Is it possible to get an update on if this is planning on being addressed?

@jmgilman check https://github.com/alekc/terraform-provider-kubectl, I haven't used it, but someone mentioned it in an issue of the other repository

YarekTyshchenko commented 2 months ago

Are there any open PRs that address this issue? It looks like the validation should treat a missing CRD as if the resource itself were missing, since in k8s it's not possible (iirc) to have a resource created with a missing CRD.