hashicorp / terraform-provider-kubernetes-alpha

A Terraform provider for Kubernetes that uses dynamic resource types and server-side apply. Supports all Kubernetes resources.
https://registry.terraform.io/providers/hashicorp/kubernetes-alpha/latest
Mozilla Public License 2.0

Get an error when using dynamic credentials #129

Open lperrin-obs opened 3 years ago

lperrin-obs commented 3 years ago

Terraform Version and Provider Version

Terraform v0.13.5

Kubernetes Version

1.19.2

Affected Resource(s)

kubernetes_manifest

Terraform Configuration Files

resource "scaleway_k8s_cluster_beta" "chewsk8s" {
  name = "chewsk8s"
  version = "1.19.2"
  cni = "calico"
  ingress = "nginx"
}
resource "scaleway_k8s_pool_beta" "chewsk8s_pool" {
  cluster_id = scaleway_k8s_cluster_beta.chewsk8s.id
  name = "chewsk8s_pool"
  node_type = "DEV1-M"
  size = 1
  wait_for_pool_ready = true
}
resource "null_resource" "kubeconfig" {
    depends_on = [scaleway_k8s_pool_beta.chewsk8s_pool]
    triggers = {
         host = scaleway_k8s_cluster_beta.chewsk8s.kubeconfig[0].host
         token = scaleway_k8s_cluster_beta.chewsk8s.kubeconfig[0].token
         cluster_ca_certificate = scaleway_k8s_cluster_beta.chewsk8s.kubeconfig[0].cluster_ca_certificate
    }
}

provider "kubernetes-alpha" {
  host             = null_resource.kubeconfig.triggers.host
  token            = null_resource.kubeconfig.triggers.token
  cluster_ca_certificate = base64decode(
     null_resource.kubeconfig.triggers.cluster_ca_certificate
  )
}

resource "kubernetes_manifest" "cluster-issuer" {
  provider = kubernetes-alpha

  manifest = {
      apiVersion = "cert-manager.io/v1"
      kind       = "ClusterIssuer"
      metadata   = {
          name = "letsencrypt-prod"
      }
      spec = {
          acme = {
              email = "#############"
              server = "https://acme-v02.api.letsencrypt.org/directory"
              privateKeySecretRef = {
                name = "letsencrypt-prod"
              }
              solvers = [{
                http01 = {
                  ingress = {
                    class = "nginx"
                  }
                }
              }]
          }
      }
  }
}

Debug Output

https://gist.github.com/lperrin-obs/e42a62c29e37f3c37483d41dc54625ed

Expected Behavior

The K8S manifest will be applied.

Actual Behavior

I get the error Error: rpc error: code = Unknown desc = no client configuration when running terraform plan while the K8s cluster has not yet been created.

Steps to Reproduce

  1. terraform plan

marcellodesales commented 3 years ago

Running into the same bug...

Setup

provider "kubernetes-alpha" {
  server_side_planning = true
}

resource "kubernetes_manifest" "cert_manager_cluster_issuer_prd" {
  provider = kubernetes-alpha

  manifest = {
    apiVersion = "cert-manager.io/v1"
    kind       = "ClusterIssuer"
    metadata = {
      name = "letsencrypt-prd"
    }
    spec = {
      acme = {
        # https://letsencrypt.org/docs/acme-protocol-updates/
        server = "https://acme-v02.api.letsencrypt.org/directory"

        # Email for the cert contact
        email = "contact@${var.domain}"

        # Name of a secret used to store the ACME account private key
        privateKeySecretRef = {
          name = "${var.domain}-private-key-secret"
        }

        # Zone resolvers by Route53 DNS01 challenges
        solvers = [{
          selector = {
            dnsZones = [var.domain]
          }
          dns01 = {
            route53 = {
              region = var.aws_region
              # https://stackoverflow.com/questions/63402926/fetch-zone-id-of-hosted-domain-on-route53-using-terraform/63403290#63403290
              hostedZoneID = data.aws_route53_zone.domain_hosted_zone.zone_id
            }
          }
        }]
      }
    }
  }
}

Logs

2020-10-27T02:37:44.385Z [DEBUG] plugin: plugin process exited: path=.terraform/plugins/registry.terraform.io/hashicorp/kubernetes-alpha/0.2.1/linux_amd64/terraform-provider-kubernetes-alpha_v0.2.1_x5 pid=257
2020-10-27T02:37:44.385Z [DEBUG] plugin: plugin exited
Error: rpc error: code = Unknown desc = no client configuration
alexsomesan commented 3 years ago

With this provider you cannot supply credentials from a resource that is created in the same apply operation.

Is that what your example is doing?

lperrin-obs commented 3 years ago

Yes, the cluster is created in the same apply.

According to issue GH-82, it's not possible right now?

alexsomesan commented 3 years ago

In order for things to improve in these situations where you create the cluster in the same apply, some changes are required in Terraform itself.

The issue is tracked upstream here: https://github.com/hashicorp/terraform/issues/4149
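Until that upstream change lands, a commonly used workaround is a two-phase apply that targets the cluster resources first, so the credentials exist before the kubernetes-alpha provider has to build a client. A sketch using the resource addresses from the example above:

```shell
# Phase 1: create only the cluster and node pool
terraform apply -target=scaleway_k8s_cluster_beta.chewsk8s \
  -target=scaleway_k8s_pool_beta.chewsk8s_pool

# Phase 2: full apply; the provider can now read real credentials
terraform apply
```

Note that Terraform itself warns that -target is meant for exceptional situations, so this is a stopgap rather than a recommended pattern.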

lperrin-obs commented 3 years ago

Ok, but I'm using resources from the Kubernetes provider in the same way to create namespaces, and it works.

provider "kubernetes" {
  load_config_file = "false"

  host             = null_resource.kubeconfig.triggers.host
  token            = null_resource.kubeconfig.triggers.token
  cluster_ca_certificate = base64decode(
     null_resource.kubeconfig.triggers.cluster_ca_certificate
  )
}

resource "kubernetes_namespace" "cert-manager" {
  metadata {
    name = "cert-manager"
  }
}
marcellodesales commented 3 years ago

Hey @lperrin-obs, I solved my problem a different way: using data sources fed from the eks module itself...

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}
provider "kubernetes" {
  version = ">= 1.13.2"

  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}
terraform {
  required_version = ">= 0.12"

  backend "s3" {}

  required_providers {
    aws    = ">= 3.0, < 4.0"
    random = "~> 3.0.0"
    k8s = {
      version = "0.8.2"
      source  = "banzaicloud/k8s"
    }
  }
}

# Configured the provider with the credentials from the eks module
provider "k8s" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}
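For reference, applying a manifest through the banzaicloud/k8s provider configured above looks roughly like this (a sketch from memory of that provider's docs; treat the attribute shape and the email value as assumptions):

```hcl
# Hedged sketch: banzaicloud/k8s applies raw YAML via a k8s_manifest resource
resource "k8s_manifest" "cluster_issuer" {
  content = yamlencode({
    apiVersion = "cert-manager.io/v1"
    kind       = "ClusterIssuer"
    metadata   = { name = "letsencrypt-prd" }
    spec = {
      acme = {
        server = "https://acme-v02.api.letsencrypt.org/directory"
        email  = "contact@example.com" # hypothetical contact address
        privateKeySecretRef = { name = "letsencrypt-prd" }
      }
    }
  })
}
```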
ecerulm commented 3 years ago

@marcellodesales , but the problem is not with provider "kubernetes" but with provider "kubernetes-alpha". Using your approach still gives Error: rpc error: code = Unknown desc = no client configuration

data "aws_eks_cluster" "main" {
  name = module.terraform-aws-modules-eks.cluster_id
}

data "aws_eks_cluster_auth" "main" {
  name = module.terraform-aws-modules-eks.cluster_id
}

provider "kubernetes" {
  load_config_file = false
  host = data.aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)
  token = data.aws_eks_cluster_auth.main.token
  version = "~> 1.9"
}

provider "kubernetes-alpha" {
  host = data.aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)
  token = data.aws_eks_cluster_auth.main.token
  server_side_planning = true
}

module "terraform-aws-modules-eks" {
  source = "terraform-aws-modules/eks/aws"
  # ...
}

It appears that until Terraform supports partial apply (#4149), this cannot really be solved without splitting the Terraform configuration into separate parts.

Since #4149 was opened in 2015, I believe it's very unlikely that we will get the partial-apply functionality anytime soon.
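Splitting the configuration typically means keeping the cluster in its own root module and reading its outputs from a second root module, e.g. via a terraform_remote_state data source. A sketch (bucket, key, region, and output names below are all hypothetical):

```hcl
# Hypothetical second root module: reads cluster details from the first
# module's remote state instead of creating the cluster in the same apply.
data "terraform_remote_state" "cluster" {
  backend = "s3"
  config = {
    bucket = "my-tf-state"            # hypothetical state bucket
    key    = "eks/terraform.tfstate"  # hypothetical state key
    region = "eu-west-1"
  }
}

provider "kubernetes-alpha" {
  host                   = data.terraform_remote_state.cluster.outputs.endpoint
  cluster_ca_certificate = base64decode(data.terraform_remote_state.cluster.outputs.ca_cert)
  token                  = data.terraform_remote_state.cluster.outputs.token
}
```

Because the cluster is fully created by the time the second configuration is planned, the provider always sees concrete credentials.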

marcellodesales commented 3 years ago

@ecerulm Yeah, I ended up using the k8s provider to do the apply with this approach because I couldn't find a way either... Sorry about that... However, I'm leaning towards the GitOps approach:

That way we have a pipeline... For instance, creating AWS certs for ALBs would need the ARN of the declared ALBs... So far that's where I'm going...

stevehipwell commented 3 years ago

This is a significant blocker to automating a K8s cluster, as there are always manifest changes required during provisioning. Is there a reason why, when the provider credentials are empty, the plan can't simply mark the resources as undefined?