hashicorp / terraform-provider-kubernetes

Terraform Kubernetes provider
https://www.terraform.io/docs/providers/kubernetes/
Mozilla Public License 2.0

Failed to get RESTMapper client │ │ cannot create discovery client: │ no client config #2598

Closed maniimanjari closed 1 month ago

maniimanjari commented 1 month ago

Terraform Version, Provider Version and Kubernetes Version

Terraform v1.9.7 on windows_amd64

Affected Resource(s)

Terraform Configuration Files

```hcl
provider "helm" {
  kubernetes {
    host                   = "https://${module.gkecluster.endpoint}"
    token                  = data.google_client_config.default.access_token
    cluster_ca_certificate = base64decode(module.gkecluster.ca_certificate)
  }
}

data "google_client_config" "default" {}

resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  create_namespace = true

  version = "3.35.4"
  values  = [file("../../k8s-manifests/argocd/values.yaml")]
}
```

This is my file to deploy ArgoCD; I also have other Terraform resources/modules that create the VPC, cluster, etc. Previously, `terraform plan` and `terraform apply` ran without errors, but today, running `terraform plan`/`apply` on the complete configuration fails with:

```
Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: Failed to get RESTMapper client
│
│ cannot create discovery client:
│ no client config
╵
```

If I target a single resource, like `terraform plan -target resource.helm_release.argocd` or `terraform plan -target module.gkecluster`, it does not throw such an error.

arybolovlev commented 1 month ago

Hi @maniimanjari,

This doesn't look like an issue with the provider, but rather with the way you initialize it with credentials, as the error message says. I would recommend troubleshooting that part first.
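One commonly recommended way to make the provider's GKE credentials more robust is to replace the static `token` with an `exec` block that fetches a fresh token at plan/apply time. This is only a sketch, not a confirmed fix for this issue; it assumes the `gke-gcloud-auth-plugin` binary is installed locally and reuses the same `module.gkecluster` outputs from the reporter's configuration:

```hcl
provider "helm" {
  kubernetes {
    host                   = "https://${module.gkecluster.endpoint}"
    cluster_ca_certificate = base64decode(module.gkecluster.ca_certificate)

    # Fetch a fresh token on every provider call instead of embedding a
    # short-lived access token in the configuration.
    # Requires the gke-gcloud-auth-plugin binary on PATH.
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "gke-gcloud-auth-plugin"
    }
  }
}
```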

maniimanjari commented 1 month ago

I am not able to fix this issue; could you please guide me here? I ran

```
gcloud config set project YOUR_PROJECT_ID
gcloud auth login
gcloud auth application-default login
```

and even connected to the cluster. My kubectl context also points to the same cluster, but I still see the error.
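As a diagnostic step (a sketch, assuming the local kubeconfig context really does point at the target cluster, as described above), the provider can temporarily be pointed at the same kubeconfig that `kubectl` uses. If the plan then succeeds, the problem lies with the inline credentials (for example, the endpoint/token/CA values being unknown at plan time) rather than with cluster access itself:

```hcl
provider "helm" {
  kubernetes {
    # Temporary diagnostic only: reuse the local kubeconfig/context
    # that kubectl is already using successfully.
    config_path = "~/.kube/config"
  }
}
```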