cktf / terraform-hcloud-rke

Terraform HCloud RKE Module

Hi, get missing token error message #1

Closed AdZoAi closed 11 months ago

AdZoAi commented 2 years ago

This is my current main.tf file:

module "rke" { source = "cktf/rke/hcloud"

name = "rke2-test" network_id = "2234660" ## hand setted network in current hetzner proyect hcloud_token = "AoiVm...............................................................................ZT2raDk5dY"

masters = { 1 = { type = "cx11" location = "fsn1" tags = {} } }

node_pools = { pool1 = { type = "cx11" location = "fsn1" min_size = 3 max_size = 5 } pool2 = { type = "cx11" location = "fsn1" min_size = 2 max_size = 5 } } }

$ terraform init
Initializing modules...

Initializing the backend...

Initializing provider plugins...

and finally I get:

jbmac$ terraform apply
╷
│ Error: Missing required argument
│
│ The argument "token" is required, but was not set.

Any idea what I'm doing wrong?

Many Thanks!

mhmnemati commented 2 years ago

@AdZoAi This error comes from the hcloud provider; you need to pass your token to the Terraform provider:

provider "hcloud" {
  token = "<REDACTED>"
}
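
As a side note, rather than hardcoding the token, you can pass it in through a variable. A minimal sketch (the variable name and the sensitive flag here are just illustrative):

variable "hcloud_token" {
  type      = string
  sensitive = true # keeps the value out of plan/apply output
}

provider "hcloud" {
  token = var.hcloud_token
}

The value can then be supplied with -var, a .tfvars file, or the TF_VAR_hcloud_token environment variable; if token is left unset entirely, the hetznercloud/hcloud provider falls back to the HCLOUD_TOKEN environment variable.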
AdZoAi commented 2 years ago

Thank you! Yes, setting the token on the provider worked. Now I have this other error:

main.tf:

module "network" { source = "cktf/network/hcloud"

name = "zpot-net" cidr = "10.0.0.0/16" subnets = { masters = { type = "server", cidr = "10.0.0.0/24" } workers = { type = "server", cidr = "10.0.1.0/24" } } }

module "rke" { source = "cktf/rke/hcloud"

name = "rke2-zpot" network_id = module.network.network_id hcloud_token = "Ao...........................................................................................................................5dY"

masters = { 1 = { type = "cx21" location = "fsn1" tags = {} } 2 = { type = "cx21" location = "fsn1" tags = {} } 3 = { type = "cx21" location = "fsn1" tags = {} } } node_pools = { pool1 = { type = "cx11" location = "fsn1" min_size = 2 max_size = 2 } } }

terraform {
  required_version = ">= 0.14.0"
  required_providers {
    tls = {
      source  = "hashicorp/tls"
      version = ">= 3.0.0"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 3.0.0"
    }
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = ">= 1.31.0"
    }
    k8sbootstrap = {
      source  = "nimbolus/k8sbootstrap"
      version = ">= 0.1.2"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.0.0"
    }
  }
}

variable "hcloud_token" {}

provider "hcloud" { token = var.hcloud_token }

Running terraform apply then gives this error:

$ terraform apply
var.hcloud_token
  Enter a value: Aoi........................................................................................................................5dY

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:

Terraform will perform the following actions:

  # module.network.hcloud_network.this will be created

Plan: 22 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.rke.tls_private_key.this: Creating...
module.rke.random_string.agent_token: Creating...
module.rke.random_string.token_id: Creating...
module.rke.random_string.cluster_token: Creating...
module.rke.random_string.token_id: Creation complete after 0s [id=qprwgf]
module.rke.random_string.token_secret: Creating...
module.network.hcloud_network.this: Creating...
module.rke.hcloud_load_balancer.this: Creating...
module.rke.hcloud_placement_group.this: Creating...
module.rke.random_string.token_secret: Creation complete after 0s [id=e395303r5xqjcq2v]
module.rke.random_string.agent_token: Creation complete after 0s [id=xLxsYdPxaRimI2PLYGFn6vGv8KBVMbvdphWhoXEOz9Edm5gD]
module.rke.random_string.cluster_token: Creation complete after 0s [id=9W3vUroafhaszHYYVEYaSHkHbeYvhCPMgHuc9197PjAY9UBh]
module.network.hcloud_network.this: Creation complete after 1s [id=2238879]
module.network.hcloud_network_subnet.this["workers"]: Creating...
module.network.hcloud_network_subnet.this["masters"]: Creating...
module.rke.hcloud_placement_group.this: Creation complete after 1s [id=98171]
module.rke.tls_private_key.this: Creation complete after 2s [id=01b8e75cbc054e91aed970a2e491653ed082041a]
module.rke.hcloud_ssh_key.this: Creating...
module.rke.hcloud_ssh_key.this: Creation complete after 1s [id=9083149]
module.rke.hcloud_load_balancer.this: Creation complete after 3s [id=958222]
module.rke.hcloud_load_balancer_service.this: Creating...
module.network.hcloud_network_subnet.this["masters"]: Creation complete after 2s [id=2238879-10.0.0.0/24]
module.network.hcloud_network_subnet.this["workers"]: Creation complete after 2s [id=2238879-10.0.1.0/24]
module.rke.hcloud_load_balancer_network.this: Creating...
module.rke.hcloud_load_balancer_service.this: Creation complete after 1s [id=958222__6443]
module.rke.hcloud_load_balancer_network.this: Creation complete after 3s [id=958222-2238879]
module.rke.hcloud_load_balancer_target.this: Creating...
module.rke.hcloud_server.this["2"]: Creating...
module.rke.hcloud_server.this["1"]: Creating...
module.rke.hcloud_server.this["3"]: Creating...
module.rke.helm_release.this: Creating...
module.rke.hcloud_load_balancer_target.this: Creation complete after 3s [id=lb-label-selector-tgt-fd13b5a9ab16eb66def9c705c073dc82a4f44e79005af5dc5082fc2d7bd2620e-958222]
module.rke.hcloud_server.this["3"]: Creation complete after 9s [id=25917223]
module.rke.hcloud_server.this["2"]: Still creating... [10s elapsed]
module.rke.hcloud_server.this["1"]: Still creating... [10s elapsed]
module.rke.hcloud_server.this["1"]: Creation complete after 10s [id=25917224]
module.rke.hcloud_server.this["2"]: Creation complete after 11s [id=25917225]
module.rke.hcloud_server_network.this["1"]: Creating...
module.rke.hcloud_server_network.this["3"]: Creating...
module.rke.data.k8sbootstrap_auth.this: Reading...
module.rke.hcloud_server_network.this["2"]: Creating...
module.rke.hcloud_firewall.this: Creating...
module.rke.hcloud_server_network.this["3"]: Creation complete after 5s [id=25917223-2238879]
module.rke.hcloud_server_network.this["1"]: Creation complete after 7s [id=25917224-2238879]
module.rke.data.k8sbootstrap_auth.this: Still reading... [10s elapsed]
module.rke.hcloud_server_network.this["2"]: Still creating... [10s elapsed]
module.rke.hcloud_firewall.this: Still creating... [10s elapsed]
module.rke.hcloud_server_network.this["2"]: Creation complete after 13s [id=25917225-2238879]
module.rke.hcloud_firewall.this: Creation complete after 17s [id=611785]
module.rke.data.k8sbootstrap_auth.this: Still reading... [20s elapsed]
module.rke.data.k8sbootstrap_auth.this: Still reading... [30s elapsed]
module.rke.data.k8sbootstrap_auth.this: Still reading... [40s elapsed]
module.rke.data.k8sbootstrap_auth.this: Still reading... [50s elapsed]
module.rke.data.k8sbootstrap_auth.this: Still reading... [1m0s elapsed]
module.rke.data.k8sbootstrap_auth.this: Still reading... [1m10s elapsed]
module.rke.data.k8sbootstrap_auth.this: Read complete after 1m20s [id=bootstrap-token]
╷
│ Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
│
│ with module.rke.helm_release.this,
│ on .terraform/modules/rke/node_pools.tf line 1, in resource "helm_release" "this":
│ 1: resource "helm_release" "this" {

Again, many thanks for any advice on this error!

mhmnemati commented 2 years ago

This problem is related to the helm provider. The rke module uses the helm provider to install the cluster autoscaler for node pools, so you need to define the helm provider in your main.tf file like below:

provider "helm" {
  kubernetes {
    host                   = module.rke.host
    token                  = module.rke.token
    cluster_ca_certificate = module.rke.ca_crt
  }
}
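
For context, host, token, and ca_crt are outputs exported by the rke module, so this wires the helm provider directly to the cluster that the module itself creates.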
AdZoAi commented 2 years ago

Hi Mohamed, thank you very much!

1) Everything went OK with the helm provider and the default type "k3s", and the cluster is created!

main.tf:

module "network" {
  source = "cktf/network/hcloud"

  name = "rancher-net"
  cidr = "10.0.0.0/16"
  subnets = {
    masters = { type = "server", cidr = "10.0.1.0/24" }
    workers = { type = "server", cidr = "10.0.2.0/24" }
  }
}

module "rke" { source = "cktf/rke/hcloud"

name = "rke2-rancher" network_id = module.network.network_id hcloud_token = "Aoi.........................................................................................................................5dY"

type = "rke2"

version_ = "v1.24.8+rke2r1"

masters = { 1 = { type = "cx21" location = "fsn1" tags = {} }, 2 = { type = "cx21" location = "fsn1" tags = {} }, 3 = { type = "cx21" location = "fsn1" tags = {} } } }

terraform {
  required_version = ">= 0.14.0"
  required_providers {
    tls = {
      source  = "hashicorp/tls"
      version = ">= 3.0.0"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 3.0.0"
    }
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = ">= 1.31.0"
    }
    k8sbootstrap = {
      source  = "nimbolus/k8sbootstrap"
      version = ">= 0.1.2"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.0.0"
    }
  }
}

provider "hcloud" { token = "Aoi.........................................................................................................................5dY" # var.hcloud_token }

provider "helm" { kubernetes { host = module.rke.host token = module.rke.token cluster_ca_certificate = module.rke.ca_crt } }

2) But destroying this cluster gives this error:

module.rke.helm_release.this: Still destroying... [id=cluster-autoscaler, 10s elapsed]
module.rke.hcloud_server_network.this["3"]: Destruction complete after 11s
module.rke.helm_release.this: Still destroying... [id=cluster-autoscaler, 20s elapsed]
module.rke.helm_release.this: Still destroying... [id=cluster-autoscaler, 30s elapsed]
module.rke.helm_release.this: Still destroying... [id=cluster-autoscaler, 40s elapsed]
╷
│ Error: uninstallation completed with 2 error(s): could not get apiVersions from Kubernetes: could not get server version from Kubernetes: Get "https://95.217.168.125:6443/version?timeout=32s": net/http: request canceled (Client.Timeout exceeded while awaiting headers); uninstall: Failed to purge the release: get: failed to get "sh.helm.release.v1.cluster-autoscaler.v1": Get "https://95.217.168.125:6443/api/v1/namespaces/kube-system/secrets/sh.helm.release.v1.cluster-autoscaler.v1": http2: client connection lost

3) With the exact same main.tf but with type = "rke2", it fails in both cases, whether version_ = "v1.24.8+rke2r1" is set or the version is not included (default):

module.rke.data.k8sbootstrap_auth.this: Still reading... [4m30s elapsed]
module.rke.data.k8sbootstrap_auth.this: Still reading... [4m40s elapsed]
module.rke.data.k8sbootstrap_auth.this: Still reading... [4m50s elapsed]
module.rke.data.k8sbootstrap_auth.this: Still reading... [5m0s elapsed]
╷
│ Error: context deadline exceeded
│
│ with module.rke.data.k8sbootstrap_auth.this,
│ on .terraform/modules/rke/token.tf line 23, in data "k8sbootstrap_auth" "this":
│ 23: data "k8sbootstrap_auth" "this" {
│
╵
192:v3 jbmac$

Do you have any working example you could provide that creates an rke2 cluster with a fixed version? That would be very helpful.

Many thanks again! Jorge

jbkarle commented 2 years ago

Hello, I have the same issue trying to deploy an rke2 cluster.

mhmnemati commented 2 years ago

@AdZoAi, @jbkarle I've tested rke2 using this sample code: the cluster was created successfully, and after that I tried to destroy it, and it was destroyed successfully:

terraform {
  required_version = ">= 0.14.0"
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = "1.35.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.15.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.14.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "2.7.1"
    }
  }
}

provider "hcloud" {}

provider "kubernetes" {
  host                   = module.rke.host
  token                  = module.rke.token
  cluster_ca_certificate = module.rke.ca_crt
}

provider "kubectl" {
  host                   = module.rke.host
  token                  = module.rke.token
  cluster_ca_certificate = module.rke.ca_crt
  load_config_file       = false
}

provider "helm" {
  kubernetes {
    host                   = module.rke.host
    token                  = module.rke.token
    cluster_ca_certificate = module.rke.ca_crt
  }
}

module "network" {
  source  = "cktf/network/hcloud"
  version = "1.5.1"

  name = "testing"
  cidr = "192.168.1.0/24"
  subnets = {
    nodes = {
      type = "cloud",
      cidr = "192.168.1.0/24"
    }
  }
}

module "rke" {
  source  = "cktf/rke/hcloud"
  version = "1.10.2"

  type         = "rke2"
  name         = "testing"
  version_     = "v1.24.8+rke2r1"
  network_id   = module.network.network_id
  hcloud_token = var.hcloud_token

  masters = {
    1 = {
      type     = "cx21"
      location = "hel1"
      tags     = {}
    }
  }

  node_pools = {
    pool1 = {
      type     = "cx41"
      location = "hel1"
      min_size = 1
      max_size = 5
    }
  }
}
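
Note that the sample passes var.hcloud_token to the module; a declaration like the following is assumed to exist alongside it (the sensitive flag is illustrative, not from the original sample):

variable "hcloud_token" {
  type      = string
  sensitive = true
}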
donydonald1 commented 2 years ago

Hello, I guess others haven't stated the issue clearly enough. The error below persists when trying to create an HA rke2 cluster, say with three master servers:

╷
│ Error: context deadline exceeded
│
│ with module.rke.data.k8sbootstrap_auth.this,
│ on .terraform/modules/rke/token.tf line 23, in data "k8sbootstrap_auth" "this":
│ 23: data "k8sbootstrap_auth" "this" {
│

And when destroying the cluster, the following error occurs:

╷
│ Error: uninstallation completed with 2 error(s): unable to build kubernetes objects for delete: [resource mapping not found for name: "cluster-autoscaler-hetzner-cluster-autoscaler" namespace: "" from "": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1"
│ ensure CRDs are installed first, unable to recognize "": Get "https://5.161.160.126:6443/api?timeout=32s": http2: client connection lost, unable to recognize "": Get "https://5.161.160.126:6443/api?timeout=32s": dial tcp 5.161.160.126:6443: connect: connection refused]; uninstall: Failed to purge the release: get: failed to get "sh.helm.release.v1.cluster-autoscaler.v1": Get "https://5.161.160.126:6443/api/v1/namespaces/kube-system/secrets/sh.helm.release.v1.cluster-autoscaler.v1": dial tcp 5.161.160.126:6443: connect: connection refused

Apparently, no worker nodes come up, even when deploying a test application. Everything is scheduled on the masters.

AdZoAi commented 2 years ago

Yes, the error appears because of the HA deployment, not because of rke2 vs. k3s. Also worth noting: with the test deployment (ckoliber's), I still get the same error when destroying, and no worker node comes up (even without HA).

mhmnemati commented 1 year ago

@donydonald1 @AdZoAi Thanks for your clarification. These days I'm very busy; I'll check the problem and fix this issue in the next release.

jbkarle commented 1 year ago

👍


donydonald1 commented 1 year ago

Hi, is there any update on this for HA RKE2? @ckoliber

mhmnemati commented 11 months ago

This problem is fixed in 1.11.0.
In this release, the module uses terraform-module-rke to set up clusters.