hashicorp / terraform-provider-kubernetes

Terraform Kubernetes provider
https://www.terraform.io/docs/providers/kubernetes/
Mozilla Public License 2.0

Kubernetes provider does not respect data when kubernetes_manifest is used #1391

Open okgolove opened 3 years ago

okgolove commented 3 years ago

Terraform Version, Provider Version and Kubernetes Version

Terraform version: v1.0.5
Kubernetes provider version: v2.4.1
Kubernetes version: 1.20.8-gke.900

Affected Resource(s)

  - kubernetes_manifest

Terraform Configuration Files

data "google_client_config" "this" {}

data "google_container_cluster" "this" {
  name     = "my-cluster"
  location = "europe-west2"
  project  = "my-project"
}

provider "kubernetes" {
  token                  = data.google_client_config.this.access_token
  host                   = data.google_container_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.google_container_cluster.this.master_auth.0.cluster_ca_certificate)

  experiments {
    manifest_resource = true
  }
}

resource "kubernetes_manifest" "test-crd" {
  manifest = {
    apiVersion = "apiextensions.k8s.io/v1"
    kind       = "CustomResourceDefinition"

    metadata = {
      name = "testcrds.hashicorp.com"
    }

    spec = {
      group = "hashicorp.com"

      names = {
        kind   = "TestCrd"
        plural = "testcrds"
      }

      scope = "Namespaced"

      versions = [{
        name    = "v1"
        served  = true
        storage = true
        schema = {
          openAPIV3Schema = {
            type = "object"
            properties = {
              data = {
                type = "string"
              }
              refs = {
                type = "number"
              }
            }
          }
        }
      }]
    }
  }
}

Debug Output

The debug log contains a lot of private information; I'd prefer not to post it.

Steps to Reproduce

  1. terraform apply

Expected Behavior

A plan is presented and, after apply, the CRD is created successfully.

Actual Behavior

Error:

Invalid attribute in provider configuration

  with provider["registry.terraform.io/hashicorp/kubernetes"],
  on main.tf line 9, in provider "kubernetes":
   9: provider "kubernetes" {

'host' is not a valid URL

╷
│ Error: Failed to construct REST client
│
│   with kubernetes_manifest.test-crd,
│   on main.tf line 19, in resource "kubernetes_manifest" "test-crd":
│   19: resource "kubernetes_manifest" "test-crd" {
│
│ cannot create REST client: no client config


Jasstkn commented 3 years ago

Hi. Same issue

sagikazarmark commented 3 years ago

It doesn't work with depends_on either.

ashtonian commented 3 years ago

Started running into the following error on destroy, which I think is related. It didn't work with tostring() either:

│ Error: Provider configuration: failed to assert type of element in 'args' value
│
│   with module.services_tools.provider["registry.terraform.io/hashicorp/kubernetes"],
│   on ../../modules/services_tools/versions.tf line 23, in provider "kubernetes":
│   23: provider "kubernetes" {

// this is required in order to pass information to the underlying kube provider for the above eks see https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1280
provider "kubernetes" {
  experiments {
    manifest_resource = true
  }
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
    command     = "aws"
  }
}
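For what it's worth, newer Kubernetes clients dropped `client.authentication.k8s.io/v1alpha1`, and recent AWS CLI versions return `v1beta1` exec credentials, so configs like the one above may also need the `api_version` bumped. A sketch based on the block above (same data source names, not an official fix for this issue):

```hcl
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  exec {
    # v1alpha1 was removed from newer clients; recent `aws eks get-token`
    # emits v1beta1 ExecCredential tokens.
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
  }
}
```
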

nyurik commented 3 years ago

Same error when using GCP and applying multiple manifests from the same file: Error: Failed to construct REST client

data "google_client_config" "current" {}

data "google_container_cluster" "cluster" {
  name     = var.cluster_name
  location = var.cluster_location
}

provider "kubernetes" {
  host = data.google_container_cluster.cluster.endpoint

  client_certificate     = base64decode(data.google_container_cluster.cluster.master_auth.0.client_certificate)
  client_key             = base64decode(data.google_container_cluster.cluster.master_auth.0.client_key)
  cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth.0.cluster_ca_certificate)

  token = data.google_client_config.current.access_token

  experiments {
    manifest_resource = true
  }
}

resource "kubernetes_manifest" "default" {
  # Create a map { "kind--name" => yaml_doc } from the multi-document yaml text.
  # Each element is a separate kubernetes resource.
  # Must use \n---\n to avoid splitting on strings and comments containing "---".
  # YAML allows "---" to be the first and last line of a file, so make sure
  # raw yaml begins and ends with a newline.
  # The "---" can be followed by spaces, so need to remove those too.
  # Skip blocks that are empty or comments-only in case yaml began with a comment before "---".
  for_each = {
    for value in [
      for yaml in split(
        "\n---\n",
        "\n${replace(file("manifests.yaml"), "/(?m)^---[[:blank:]]+$/", "---")}\n"
      ) :
      yamldecode(yaml)
      if trimspace(replace(yaml, "/(?m)(^[[:blank:]]*(#.*)?$)+/", "")) != ""
    ] : "${value["kind"]}--${value["metadata"]["name"]}" => value
  }
  manifest = each.value
}

rvillane commented 2 years ago

When using kubernetes provider v2.6.1 and terraform v1.x.x, the error shown is the following:

Invalid attribute in provider configuration

  with provider["registry.terraform.io/hashicorp/kubernetes"],
  on provider.tf line 24, in provider "kubernetes":
  24: provider "kubernetes" {

'host' is not a valid URL

tclift commented 2 years ago

The error:

'host' is not a valid URL

is likely because:

host = data.google_container_cluster.this.endpoint

should have been (as per #1468):

host = "https://${data.google_container_cluster.this.endpoint}"
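Expanded into the full provider block from the original report, the fix would look like this (a sketch reusing the data source names from the report):

```hcl
provider "kubernetes" {
  # GKE's endpoint attribute is a bare IP/hostname, so the scheme must be added.
  host                   = "https://${data.google_container_cluster.this.endpoint}"
  token                  = data.google_client_config.this.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.this.master_auth.0.cluster_ca_certificate)
}
```
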

but:

cannot create REST client: no client config

is happening for me despite host being a URL, and I'm not sure where to look next to diagnose.

Edit: Seen in logs (TF_LOG=TRACE terraform apply):

2021-11-01T17:16:22.257+1100 [DEBUG] provider.terraform-provider-kubernetes_v2.6.1_x5: 2021-11-01T17:16:22.256+1100 [ERROR] [Configure]: Failed to load config:="&{0xc001212820 0xc0007e6fc0 <nil> 0xc000176c00 {0 0} 0xc001211f30}"

so it looks like this code path is being taken. I noted the comment:

// this is a terrible fix for if the configuration is a calculated value

so perhaps clientConfig is expected to be populated elsewhere, later on...

tclift commented 2 years ago

This may have been evident from the issue title, but those looking for a workaround can remove dynamic/data values from the provider configuration.

E.g., given a suitably configured kubectl environment, replacing:

provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.default.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.default.master_auth.0.cluster_ca_certificate)
}

with:

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "gke_my-project_my-region_my-cluster"
}

ismailyenigul commented 2 years ago

Getting Failed to construct REST client when I try to deploy an Argo CD app on a non-existent EKS cluster, but it works fine on a running EKS cluster.

│ Error: Failed to construct REST client
│ 
│   with module.argocd_application_gitops.kubernetes_manifest.argo_application,
│   on .terraform/modules/argocd_application_gitops/main.tf line 1, in resource "kubernetes_manifest" "argo_application":
│    1: resource "kubernetes_manifest" "argo_application" {

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "helm" {

  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

module "eks" {

...
}

module "argocd_application_gitops" {

  depends_on = [module.vpc, module.eks, module.eks_services]
  source     = "project-octal/argocd-application/kubernetes"
  version    = "2.0.0"

  argocd_namespace    = var.argocd_k8s_namespace
  destination_server  = "https://kubernetes.default.svc"
  project             = var.argocd_project_name
  name                = "gitops"
  namespace           = "myns"
  repo_url            = var.argocd_root_gitops_url
  path                = "Chart"
  chart               = ""
  target_revision     = "master"
  automated_self_heal = true
  automated_prune     = true
}
vasylenko commented 2 years ago

Apparently the helm provider (when configured the same way) does not have this issue, so I can describe helm resources in Terraform even when the cluster does not yet exist. But I can't keep the kubernetes_manifest code in the project until the cluster is created.

It would be great to see the issue with Failed to construct REST client for the Kubernetes provider solved soon! 🤞

barantomasz83 commented 2 years ago

Same problem with cert-manager:

Error: Failed to construct REST client
│
│   with module.eks_cluster_first.module.cert_manager.kubernetes_manifest.cluster_issuer_selfsigned,
│   on modules\cert_manager\cert_manager.tf line 89, in resource "kubernetes_manifest" "cluster_issuer_selfsigned":
│   89: resource "kubernetes_manifest" "cluster_issuer_selfsigned" {
│
│ cannot create REST client: no client config

sidh commented 2 years ago

Same issue here. Serious blocker for us. :(

DrEsteban commented 2 years ago

Still seeing this on provider version 2.10.0

edlevin6612 commented 2 years ago

I ended up moving my kubernetes_manifest resources to another Terraform project invoked after the cluster is created but definitely not ideal.

SizZiKe commented 2 years ago

how is this still an issue? Still affected.

FR-Solution commented 2 years ago

The problem is still present; we'd really appreciate a fix.

luis-guimaraes-exoawk commented 2 years ago

Still an issue, please fix this

manan commented 1 year ago

+1

chengleqi commented 1 year ago

Same here.

5imun commented 1 year ago

+1, this is a significant problem

odee30 commented 1 year ago

+1 - It even occurs if I run a plan using -target to deploy the cluster first

nagidocs commented 1 year ago

Still an issue with terraform plan when the cluster is not yet present!

tungavaso commented 1 year ago

same here

rpressiani commented 1 year ago

+1

Lazzu commented 1 year ago

I have this issue as well

vespian commented 1 year ago

Same here, 1.5 year and counting.

amreshh commented 1 year ago

Also running into this issue. Since I have a custom resource, I want to use the kubernetes_manifest resource; however, according to the documentation:

This resource requires API access during planning time. This means the cluster has to be accessible at plan time and thus cannot be created in the same apply operation.

chudyandrej commented 1 year ago

+1

hguermeur commented 1 year ago

Same issue here : Error: Failed to construct REST client and cannot create REST client: no client config

caracostea commented 1 year ago

Same...

Failed to construct REST client

cannot create REST client: no client config

marcinprzybysz86 commented 1 year ago

Still an issue! Cannot create AWS infra and all related resources in a new, empty account because the EKS cluster does not yet exist, even though I have dependencies. That's silly!

schoenenberg commented 1 year ago

I don't want to post another +1 here, but I do have the same issue when trying to deploy a certmanager Issuer.

How can we get the attention of the maintainers here? This issue has been open for almost two years and affects many users.

MonicaMagoniCom commented 10 months ago

I'm experiencing the same issue. And also many others related to Kubernetes provider :(

luigi-bitonti commented 10 months ago

@jrhouston can you help us with this issue?

dmajano commented 9 months ago

+1

jackspirou commented 6 months ago

still an issue +1

alexsomesan commented 6 months ago

The kubernetes_manifest resource requires the cluster to be present when planning such resources. Because of this, applying the cluster and kubernetes_manifest resources in the same Terraform run is not supported at the moment.

This is documented in the "before you use" section of the resource documentation.

We are exploring solutions to this, but they require changes to Terraform itself and the underlying provider SDKs so we can't anticipate when one will become available.

The recommendation remains to split the configuration into two apply operations: a first one to create the cluster and its infrastructure, and a second one to create the Kubernetes resources.
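As an illustration, the split can be structured as two root modules applied in sequence (a sketch; the module names, file paths, and cluster name are illustrative, not a layout prescribed by the provider):

```hcl
# cluster/main.tf — stage 1: cluster infrastructure only
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... cluster configuration ...
}

# workloads/main.tf — stage 2: applied only after stage 1 succeeds,
# so the cluster is reachable at plan time
data "aws_eks_cluster" "cluster" {
  name = "my-cluster" # a literal, not a reference into stage 1 state
}

data "aws_eks_cluster_auth" "cluster" {
  name = "my-cluster"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

resource "kubernetes_manifest" "example" {
  manifest = yamldecode(file("${path.module}/manifest.yaml"))
}
```
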

autarchprinceps commented 3 days ago

But why does this work with non-manifest resources then? They can be created in the same apply, while setting up the provider from module outputs or the like. If this were a fundamental issue with not being able to set up the provider from settings only known after applying resources, they would be just as broken. Obligatory "still a massive issue, please fix".