Open okgolove opened 3 years ago
Hi, same issue. It doesn't work with `depends_on` either.
I started running into the following error on destroy, which I think is related; it didn't work with `tostring()` either:
```
│ Error: Provider configuration: failed to assert type of element in 'args' value
│
│ with module.services_tools.provider["registry.terraform.io/hashicorp/kubernetes"],
│ on ../../modules/services_tools/versions.tf line 23, in provider "kubernetes":
│ 23: provider "kubernetes" {
```
```hcl
// This is required in order to pass information to the underlying kube provider
// for the EKS cluster above; see https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1280
provider "kubernetes" {
  experiments {
    manifest_resource = true
  }

  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
    command     = "aws"
  }
}
```
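Note in passing (separate from the REST client error itself): the `client.authentication.k8s.io/v1alpha1` exec credential API was removed in Kubernetes 1.24, and newer AWS CLI releases emit `v1beta1` credentials from `aws eks get-token`, so on recent clusters the `exec` block above may need updating. A sketch, assuming a newer AWS CLI:

```hcl
exec {
  api_version = "client.authentication.k8s.io/v1beta1"
  args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
  command     = "aws"
}
```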
Same error when using GCP and applying multiple manifests from the same file (`│ Error: Failed to construct REST client`):
```hcl
data "google_client_config" "current" {}

data "google_container_cluster" "cluster" {
  name     = var.cluster_name
  location = var.cluster_location
}

provider "kubernetes" {
  host                   = data.google_container_cluster.cluster.endpoint
  client_certificate     = base64decode(data.google_container_cluster.cluster.master_auth.0.client_certificate)
  client_key             = base64decode(data.google_container_cluster.cluster.master_auth.0.client_key)
  cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth.0.cluster_ca_certificate)
  token                  = data.google_client_config.current.access_token

  experiments {
    manifest_resource = true
  }
}
```
```hcl
resource "kubernetes_manifest" "default" {
  # Create a map { "kind--name" => yaml_doc } from the multi-document YAML text.
  # Each element is a separate Kubernetes resource.
  # Must split on "\n---\n" to avoid splitting on strings and comments containing "---".
  # YAML allows "---" to be the first and last line of a file, so make sure the
  # raw YAML begins and ends with a newline.
  # The "---" can be followed by spaces, so those need to be removed too.
  # Skip blocks that are empty or comment-only, in case the YAML began with a comment before "---".
  for_each = {
    for value in [
      for yaml in split(
        "\n---\n",
        "\n${replace(file("manifests.yaml"), "/(?m)^---[[:blank:]]+$/", "---")}\n"
      ) :
      yamldecode(yaml)
      if trimspace(replace(yaml, "/(?m)(^[[:blank:]]*(#.*)?$)+/", "")) != ""
    ] : "${value["kind"]}--${value["metadata"]["name"]}" => value
  }

  manifest = each.value
}
```
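For illustration (hypothetical file contents), a `manifests.yaml` like the following would produce the `for_each` keys `Namespace--example` and `ConfigMap--app-config`, per the `"${kind}--${name}"` key expression above:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: example
data:
  key: value
```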
When using kubernetes provider v2.6.1 and terraform v1.x.x, the error shown is the following:
```
Invalid attribute in provider configuration

  with provider["registry.terraform.io/hashicorp/kubernetes"],
  on provider.tf line 24, in provider "kubernetes":
  24: provider "kubernetes" {

'host' is not a valid URL
```
The error `'host' is not a valid URL` is likely because:

```hcl
host = data.google_container_cluster.this.endpoint
```

should have been (as per #1468):

```hcl
host = "https://${data.google_container_cluster.this.endpoint}"
```

but `cannot create REST client: no client config` is happening for me despite `host` being a URL, and I'm not sure where to look next to diagnose.
Edit: seen in the logs (`TF_LOG=TRACE terraform apply`):

```
2021-11-01T17:16:22.257+1100 [DEBUG] provider.terraform-provider-kubernetes_v2.6.1_x5: 2021-11-01T17:16:22.256+1100 [ERROR] [Configure]: Failed to load config:="&{0xc001212820 0xc0007e6fc0 <nil> 0xc000176c00 {0 0} 0xc001211f30}"
```

so it looks like this code path is being taken. I noted the comment:

```go
// this is a terrible fix for if the configuration is a calculated value
```

so perhaps `clientConfig` is expected to be populated elsewhere, later on...
This may have been evident from the issue title, but those looking for a workaround can remove dynamic/data values from the provider configuration. E.g., given a suitably configured `kubectl` environment, replacing:

```hcl
provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.default.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.default.master_auth.0.cluster_ca_certificate)
}
```

with:

```hcl
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "gke_my-project_my-region_my-cluster"
}
```
Getting `Failed to construct REST client` when I try to deploy an Argo CD app on a not-yet-created EKS cluster. It works fine on a running EKS cluster.

```
│ Error: Failed to construct REST client
│
│ with module.argocd_application_gitops.kubernetes_manifest.argo_application,
│ on .terraform/modules/argocd_application_gitops/main.tf line 1, in resource "kubernetes_manifest" "argo_application":
│ 1: resource "kubernetes_manifest" "argo_application" {
```
```hcl
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

module "eks" {
  ...
}

module "argocd_application_gitops" {
  depends_on          = [module.vpc, module.eks, module.eks_services]
  source              = "project-octal/argocd-application/kubernetes"
  version             = "2.0.0"
  argocd_namespace    = var.argocd_k8s_namespace
  destination_server  = "https://kubernetes.default.svc"
  project             = var.argocd_project_name
  name                = "gitops"
  namespace           = "myns"
  repo_url            = var.argocd_root_gitops_url
  path                = "Chart"
  chart               = ""
  target_revision     = "master"
  automated_self_heal = true
  automated_prune     = true
}
```
Apparently, the `helm` provider (when configured in the same way) does not have this issue, so I can keep the Helm resources described in Terraform even when the cluster does not exist yet. But I can't have the Kubernetes manifest Terraform code in the project until the cluster is created.
It would be great to see the `Failed to construct REST client` issue in the Kubernetes provider solved soon! 🤞
Same problem with cert-manager:
```
│ Error: Failed to construct REST client
│
│ with module.eks_cluster_first.module.cert_manager.kubernetes_manifest.cluster_issuer_selfsigned,
│ on modules\cert_manager\cert_manager.tf line 89, in resource "kubernetes_manifest" "cluster_issuer_selfsigned":
│ 89: resource "kubernetes_manifest" "cluster_issuer_selfsigned" {
│
│ cannot create REST client: no client config
```
Same issue here. Serious blocker for us. :(
Still seeing this on provider version 2.10.0
I ended up moving my `kubernetes_manifest` resources to another Terraform project invoked after the cluster is created, but that's definitely not ideal.
How is this still an issue? Still affected.
The problem is still present; we'd really appreciate a fix.
Still an issue, please fix this
+1
Same here.
+1, this is a significant problem.
+1. It even occurs if I try to run a plan using `-target` to deploy the cluster first.
Still an issue with TF Plan when cluster is not yet present!
same here
+1
I have this issue as well
Same here, 1.5 year and counting.
Also running into this issue. Since I have a custom resource, I want to use the `kubernetes_manifest` resource; however, according to the documentation:

> This resource requires API access during planning time. This means the cluster has to be accessible at plan time and thus cannot be created in the same apply operation.
+1
Same issue here: `Error: Failed to construct REST client` and `cannot create REST client: no client config`.
Same... `Failed to construct REST client`, `cannot create REST client: no client config`.
Still an issue! I cannot create AWS infra and everything related in a new empty account because the EKS cluster does not yet exist, even though I have dependencies set up. That's silly!
I don't want to post another +1 here, but I do have the same issue when trying to deploy a cert-manager `Issuer`.
How can we get the attention of the maintainers here? This issue has been open for almost two years and affects many users.
I'm experiencing the same issue. And also many others related to Kubernetes provider :(
@jrhouston can you help us with this issue?
+1
still an issue +1
The `kubernetes_manifest` resource requires the cluster to be present when planning such resources. Because of this, applying the cluster and `kubernetes_manifest` resources in the same Terraform run is not supported at the moment.
This is documented in the "before you use" section of the resource documentation.
We are exploring solutions to this, but they require changes to Terraform itself and the underlying provider SDKs, so we can't anticipate when one will become available.
The recommendation remains to split the configuration into two apply operations: a first one to create the cluster and its infrastructure, and a second one to create the Kubernetes resources.
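One common shape for that split (directory names, output names, and the local backend are illustrative assumptions): the second root module reads the cluster details from the first one's state, so the provider configuration is fully known at plan time:

```hcl
# k8s/main.tf -- applied second, after the cluster already exists.
data "terraform_remote_state" "infra" {
  backend = "local"
  config = {
    path = "../infra/terraform.tfstate" # assumes the cluster config lives in ../infra
  }
}

data "aws_eks_cluster" "cluster" {
  name = data.terraform_remote_state.infra.outputs.cluster_name
}

data "aws_eks_cluster_auth" "cluster" {
  name = data.terraform_remote_state.infra.outputs.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}
```

Running `terraform apply` in `infra/` first and then in `k8s/` keeps `kubernetes_manifest` resources out of the run that creates the cluster.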
But why does this work with non-manifest resources, then? They can be created in the same apply, while the provider is set up from module outputs or the like. If this were a fundamental issue with configuring the provider from values only known after applying resources, those resources would be just as broken. Obligatory "still a massive issue, please fix".
Debug Output
The debug log contains a lot of private information; I'd prefer not to post it.
Steps to Reproduce
`terraform apply`
Expected Behavior
A plan is presented; after apply, the CRD is created successfully.
Actual Behavior
Error: