hashicorp / learn-terraform-deploy-nginx-kubernetes-provider

Deploy and expose an NGINX service using the Terraform Kubernetes Provider
https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider?in=terraform/kubernetes
Mozilla Public License 2.0

Error: Failed to configure client: tls: failed to find any PEM data in certificate input #3

Open · vodelerk opened this issue 3 years ago

vodelerk commented 3 years ago

Hi, I'm following this tutorial: https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider?in=terraform/kubernetes

I'm working on Windows 10 with kind, and when I run terraform apply

I'm getting this error:

kubernetes_deployment.nginx: Creating...
╷
│ Error: Failed to configure client: tls: failed to find any PEM data in certificate input
│
│   with kubernetes_deployment.nginx,
│   on kubernetes.tf line 32, in resource "kubernetes_deployment" "nginx":
│   32: resource "kubernetes_deployment" "nginx" {
│
╵

I have already checked the box in Docker Desktop to "Expose daemon on tcp://localhost:2375 without TLS".

Any advice?

Thanks,

im2nguyen commented 3 years ago

Hey @vodelerk, I think the keys you set to authenticate your k8s provider are incorrect. Can you re-validate them?

vodelerk commented 3 years ago

According to the tutorial:

Define the variables in a terraform.tfvars file.

host corresponds with clusters.cluster.server.
client_certificate corresponds with users.user.client-certificate.
client_key corresponds with users.user.client-key.
cluster_ca_certificate corresponds with clusters.cluster.certificate-authority-data.
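
In other words, the wiring should look roughly like this (just a sketch: the variable names follow the tutorial, and the values shown are placeholders for what comes out of the kind kubeconfig):

# variables.tf (as defined in the tutorial)
variable "host" {
  type = string
}

variable "client_certificate" {
  type = string
}

variable "client_key" {
  type = string
}

variable "cluster_ca_certificate" {
  type = string
}

# terraform.tfvars -- placeholder values; the real ones come from the kind
# kubeconfig, where the certificate/key fields are base64-encoded blobs
host                   = "https://127.0.0.1:53117"   # clusters.cluster.server (port will differ)
client_certificate     = "<users.user.client-certificate value>"
client_key             = "<users.user.client-key value>"
cluster_ca_certificate = "<clusters.cluster.certificate-authority-data value>"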

This is my setup:

[screenshot of the variable configuration]

Maybe I'm just sleepy at this point, but I double-checked, and it looks correct.

Is there something I'm not seeing?

im2nguyen commented 3 years ago

Oh, those values are base64-encoded. Can you try this? It should work, and we'll update the guide soon. Thanks for reporting, @vodelerk! :smile:

provider "kubernetes" {
  host = var.host

  client_certificate     = base64decode(var.client_certificate)
  client_key             = base64decode(var.client_key)
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
}
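
The base64-encoded values themselves can be read straight out of the kind kubeconfig, roughly like this (assuming the tutorial's cluster name, terraform-learn):

# Print the kubeconfig for the kind cluster with credentials embedded inline:
kubectl config view --minify --flatten --context kind-terraform-learn

# In that output, the fields map to the tutorial's variables as follows
# (the *-data fields hold the base64-encoded blobs):
#   clusters[0].cluster.server                     -> host
#   users[0].user.client-certificate-data          -> client_certificate
#   users[0].user.client-key-data                  -> client_key
#   clusters[0].cluster.certificate-authority-data -> cluster_ca_certificate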
vodelerk commented 3 years ago

Now terraform apply gets stuck waiting for the replicas to be ready.

If I check the Docker Desktop logs for "terraform-learn-control-plane", I'm getting:


[ OK ] Reached target Sockets.

Failed to attach 171 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/dev-hugepages.mount: No such file or directory

Mounting Huge Pages File System...

Failed to attach 171 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/dev-hugepages.mount: No such file or directory

Failed to attach 172 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/sys-kernel-debug.mount: No such file or directory

Mounting Kernel Debug File System...

Failed to attach 172 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/sys-kernel-debug.mount: No such file or directory

Failed to attach 173 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/sys-kernel-tracing.mount: No such file or directory

Mounting Kernel Trace File System...

Failed to attach 173 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/sys-kernel-tracing.mount: No such file or directory

Starting Journal Service...

Failed to attach 175 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/sys-fs-fuse-connections.mount: No such file or directory

Mounting FUSE Control File System...

Failed to attach 175 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/sys-fs-fuse-connections.mount: No such file or directory

Failed to attach 176 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/systemd-remount-fs.service: No such file or directory

Starting Remount Root and Kernel File Systems...

Failed to attach 176 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/systemd-remount-fs.service: No such file or directory

Failed to attach 177 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/systemd-sysctl.service: No such file or directory

Starting Apply Kernel Variables...

Failed to attach 177 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/systemd-sysctl.service: No such file or directory

[ OK ] Mounted Huge Pages File System.

[ OK ] Mounted Kernel Debug File System.

[ OK ] Mounted Kernel Trace File System.

[ OK ] Mounted FUSE Control File System.

[ OK ] Finished Remount Root and Kernel File Systems.

Failed to attach 178 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/systemd-sysusers.service: No such file or directory

Starting Create System Users...

Failed to attach 178 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/systemd-sysusers.service: No such file or directory

Failed to attach 179 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/systemd-update-utmp.service: No such file or directory

Starting Update UTMP about System Boot/Shutdown...

Failed to attach 179 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/systemd-update-utmp.service: No such file or directory

[ OK ] Finished Apply Kernel Variables.

[ OK ] Finished Create System Users.

Failed to attach 180 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/systemd-tmpfiles-setup-dev.service: No such file or directory

Starting Create Static Device Nodes in /dev...

Failed to attach 180 to compat systemd cgroup /docker/9baf4bba5b3aa9a8b867f25fd28c44625a4b2a5fbacac52bc803fb8da164ed82/system.slice/systemd-tmpfiles-setup-dev.service: No such file or directory

Detected virtualization docker.

Detected architecture x86-64.

Failed to create symlink /sys/fs/cgroup/net_cls: File exists

Failed to create symlink /sys/fs/cgroup/net_prio: File exists

Failed to create symlink /sys/fs/cgroup/cpu: File exists

Failed to create symlink /sys/fs/cgroup/cpuacct: File exists

Welcome to Ubuntu 20.10!
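
(For what it's worth, the same logs can also be captured from a terminal, roughly like this; the container and cluster names come from the tutorial setup above.)

# Tail the kind control-plane container directly:
docker logs terraform-learn-control-plane

# Or dump all cluster logs to a local directory:
kind export logs --name terraform-learn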

I tried it with the TLS option in Docker Desktop both checked and unchecked.

The error I get from Terraform:

kubernetes_deployment.nginx: Still creating... [9m50s elapsed]
╷
│ Error: Waiting for rollout to finish: 2 replicas wanted; 0 replicas Ready
│
│   with kubernetes_deployment.nginx,
│   on kubernetes.tf line 32, in resource "kubernetes_deployment" "nginx":
│   32: resource "kubernetes_deployment" "nginx" {
│
╵

Any advice?

im2nguyen commented 3 years ago

Hey @vodelerk, I tried on my Windows machine and wasn't able to reproduce this. Can you try reinstalling or upgrading your current version of Docker?
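
If reinstalling doesn't help, it's also worth comparing tool versions; a quick client-side sketch:

# Versions of the tools involved (client-side only):
docker version
kind version
kubectl version --client
terraform version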

vodelerk commented 3 years ago

Let me run that test in a new environment, and I'll get back to you with the results!

vodelerk commented 3 years ago

I'm now working from a clean environment, but I'm still getting the Terraform error. This time I enabled debug logging with export TF_LOG=trace, and I'm getting this:

---[ REQUEST ]--------------------------------------- GET /apis/apps/v1/namespaces/default/deployments/scalable-nginx-example HTTP/1.1 Host: 127.0.0.1:53117 User-Agent: HashiCorp/1.0 Terraform/0.15.4 Accept: application/json, / Accept-Encoding: gzip

-----------------------------------------------------: timestamp=2021-06-03T04:08:32.477-0500 2021-06-03T04:08:32.623-0500 [TRACE] dag/walk: vertex "root" is waiting for "meta.count-boundary (EachMode fixup)" 2021-06-03T04:08:32.623-0500 [TRACE] dag/walk: vertex "provider[\"registry.terraform.io/hashicorp/kubernetes\"] (close)" is waiting for "kubernetes_deployment.nginx" 2021-06-03T04:08:32.623-0500 [TRACE] dag/walk: vertex "meta.count-boundary (EachMode fixup)" is waiting for "kubernetes_deployment.nginx" 2021-06-03T04:08:32.719-0500 [INFO] provider.terraform-provider-kubernetes_v2.3.0_x5.exe: 2021/06/03 04:08:32 [DEBUG] Kubernetes API Response Details: ---[ RESPONSE ]-------------------------------------- HTTP/2.0 200 OK Content-Length: 3481 Cache-Control: no-cache, private Content-Type: application/json Date: Thu, 03 Jun 2021 09:08:32 GMT X-Kubernetes-Pf-Flowschema-Uid: 6a645b83-444d-4ebe-bfeb-7c3d4dc9928b X-Kubernetes-Pf-Prioritylevel-Uid: c4e564ae-a4db-4057-a75a-a9caf066140a

{ "kind": "Deployment", "apiVersion": "apps/v1", "metadata": { "name": "scalable-nginx-example", "namespace": "default", "uid": "94df581d-4dfc-4ad3-bbd5-c3a62c058e52", "resourceVersion": "3442", "generation": 1, "creationTimestamp": "2021-06-03T08:58:33Z", "labels": { "App": "ScalableNginxExample" }, "annotations": { "deployment.kubernetes.io/revision": "1" }, "managedFields": [ { "manager": "HashiCorp", "operation": "Update", "apiVersion": "apps/v1", "time": "2021-06-03T08:58:33Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:labels": { ".": {}, "f:App": {} } }, "f:spec": { "f:progressDeadlineSeconds": {}, "f:replicas": {}, "f:revisionHistoryLimit": {}, "f:selector": {}, "f:strategy": { "f:rollingUpdate": { ".": {}, "f:maxSurge": {}, "f:maxUnavailable": {} }, "f:type": {} }, "f:template": { "f:metadata": { "f:labels": { ".": {}, "f:App": {} } }, "f:spec": { "f:automountServiceAccountToken": {}, "f:containers": { "k:{\"name\":\"example\"}": { ".": {}, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:ports": { ".": {}, "k:{\"containerPort\":80,\"protocol\":\"TCP\"}": { ".": {}, "f:containerPort": {}, "f:protocol": {} } }, "f:resources": { ".": {}, "f:limits": { ".": {}, "f:cpu": {}, "f:memory": {} }, "f:requests": { ".": {}, "f:cpu": {}, "f:memory": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {} } }, "f:dnsPolicy": {}, "f:enableServiceLinks": {}, "f:restartPolicy": {}, "f:schedulerName": {}, "f:securityContext": {}, "f:shareProcessNamespace": {}, "f:terminationGracePeriodSeconds": {} } } } } }, { "manager": "kube-controller-manager", "operation": "Update", "apiVersion": "apps/v1", "time": "2021-06-03T09:00:03Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:annotations": { ".": {}, "f:deployment.kubernetes.io/revision": {} } }, "f:status": { "f:conditions": { ".": {}, "k:{\"type\":\"Available\"}": { ".": {}, "f:lastTransitionTime": {}, "f:lastUpdateTime": {}, "f:message": {}, "f:reason": {}, "f:status": {}, "f:type": {} }, "k:{\"type\":\"Progressing\"}": { ".": {}, "f:lastTransitionTime": {}, "f:lastUpdateTime": {}, "f:message": {}, "f:reason": {}, "f:status": {}, "f:type": {} } }, "f:observedGeneration": {}, "f:replicas": {}, "f:unavailableReplicas": {}, "f:updatedReplicas": {} } } } ] }, "spec": { "replicas": 2, "selector": { "matchLabels": { "App": "ScalableNginxExample" } }, "template": { "metadata": { "creationTimestamp": null, "labels": { "App": "ScalableNginxExample" } }, "spec": { "containers": [ { "name": "example", "image": "nginx:1.7.8", "ports": [ { "containerPort": 80, "protocol": "TCP" } ], "resources": { "limits": { "cpu": "500m", "memory": "512Mi" }, "requests": { "cpu": "250m", "memory": "50Mi" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" } ], "restartPolicy": "Always", "terminationGracePeriodSeconds": 30, "dnsPolicy": "ClusterFirst", "automountServiceAccountToken": true, "shareProcessNamespace": false, "securityContext": {}, "schedulerName": "default-scheduler", "enableServiceLinks": true } }, "strategy": { "type": "RollingUpdate", "rollingUpdate": { "maxUnavailable": "25%", "maxSurge": "25%" } }, "revisionHistoryLimit": 10, "progressDeadlineSeconds": 600 }, "status": { "observedGeneration": 1, "replicas": 2, "updatedReplicas": 2, "unavailableReplicas": 2, "conditions": [ { "type": "Available", "status": "False", "lastUpdateTime": "2021-06-03T08:58:33Z", "lastTransitionTime": "2021-06-03T08:58:33Z", "reason": 
"MinimumReplicasUnavailable", "message": "Deployment does not have minimum availability." }, { "type": "Progressing", "status": "True", "lastUpdateTime": "2021-06-03T09:00:02Z", "lastTransitionTime": "2021-06-03T08:58:33Z", "reason": "ReplicaSetUpdated", "message": "ReplicaSet \"scalable-nginx-example-5fbb9989bf\" is progressing." } ] } }

-----------------------------------------------------: timestamp=2021-06-03T04:08:32.718-0500 2021-06-03T04:08:32.720-0500 [INFO] provider.terraform-provider-kubernetes_v2.3.0_x5.exe: 2021/06/03 04:08:32 [TRACE] Waiting 10s before next try: timestamp=2021-06-03T04:08:32.719-0500 2021-06-03T04:08:33.022-0500 [TRACE] maybeTainted: kubernetes_deployment.nginx encountered an error during creation, so it is now marked as tainted 2021-06-03T04:08:33.022-0500 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState to workingState for kubernetes_deployment.nginx 2021-06-03T04:08:33.022-0500 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState: writing state object for kubernetes_deployment.nginx 2021-06-03T04:08:33.025-0500 [TRACE] evalApplyProvisioners: kubernetes_deployment.nginx is tainted, so skipping provisioning 2021-06-03T04:08:33.025-0500 [TRACE] maybeTainted: kubernetes_deployment.nginx was already tainted, so nothing to do 2021-06-03T04:08:33.025-0500 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState to workingState for kubernetes_deployment.nginx 2021-06-03T04:08:33.025-0500 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState: writing state object for kubernetes_deployment.nginx 2021-06-03T04:08:33.033-0500 [TRACE] statemgr.Filesystem: have already backed up original terraform.tfstate to terraform.tfstate.backup on a previous write 2021-06-03T04:08:33.035-0500 [TRACE] statemgr.Filesystem: no state changes since last snapshot 2021-06-03T04:08:33.035-0500 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate 2021-06-03T04:08:33.038-0500 [TRACE] vertex "kubernetes_deployment.nginx": visit complete 2021-06-03T04:08:33.038-0500 [TRACE] dag/walk: upstream of "provider[\"registry.terraform.io/hashicorp/kubernetes\"] (close)" errored, so skipping 2021-06-03T04:08:33.038-0500 [TRACE] dag/walk: upstream of "meta.count-boundary (EachMode fixup)" errored, so skipping 2021-06-03T04:08:33.038-0500 [TRACE] dag/walk: upstream of "root" errored, so skipping 2021-06-03T04:08:33.039-0500 [TRACE] statemgr.Filesystem: have already backed up original terraform.tfstate to terraform.tfstate.backup on a previous write 2021-06-03T04:08:33.040-0500 [TRACE] statemgr.Filesystem: no state changes since last snapshot 2021-06-03T04:08:33.040-0500 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate ╷ │ Error: Waiting for rollout to finish: 2 replicas wanted; 0 replicas Ready │ │ with kubernetes_deployment.nginx, │ on kubernetes.tf line 32, in resource "kubernetes_deployment" "nginx": │ 32: resource "kubernetes_deployment" "nginx" { │ ╵ 2021-06-03T04:08:33.047-0500 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info 2021-06-03T04:08:33.049-0500 [TRACE] statemgr.Filesystem: unlocked by closing terraform.tfstate 2021-06-03T04:08:33.061-0500 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing" 2021-06-03T04:08:33.097-0500 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/kubernetes/2.3.0/windows_amd64/terraform-provider-kubernetes_v2.3.0_x5.exe pid=18064 2021-06-03T04:08:33.098-0500 [DEBUG] provider: plugin exited

Any advice?

im2nguyen commented 3 years ago

Hey @vodelerk, I think this is an issue with kind and Docker rather than with Terraform.

Can you try running kubectl get deployments? If it returns the deployments, we can isolate the issue.

vodelerk commented 3 years ago

I'm getting this:

$ kubectl get deployments
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
scalable-nginx-example   0/2     2            0           8h


I installed kind from Chocolatey.

It should show 2/2 under READY according to the tutorial, right?

im2nguyen commented 3 years ago

Yeah, this should be 2/2. I think there's something wrong with your kind and/or Docker instance.
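
A few checks that might narrow it down (just a sketch; the deployment name and the App=ScalableNginxExample label are taken from your trace output above):

# Inspect the pods behind the deployment and recent cluster events:
kubectl get pods -l App=ScalableNginxExample
kubectl describe deployment scalable-nginx-example
kubectl get events --sort-by='.metadata.creationTimestamp'

# If the cluster itself looks unhealthy, recreating it rules out stale
# kind/Docker state (cluster name from the tutorial):
kind delete cluster --name terraform-learn
kind create cluster --name terraform-learn

Note that recreating the cluster regenerates the certificates and the API server port, so the values in terraform.tfvars would need to be updated afterwards.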

vodelerk commented 3 years ago

How did you install kind? What version of Windows are you using? Which Windows updates do you have applied?