Closed KoffeinKaio closed 8 months ago
Hi @KoffeinKaio, sorry for the late reply; I couldn't reproduce the behavior described here. What versions of Terraform and the providers are you using?
Hey,
```
# terraform -v
Terraform v1.7.3
on linux_amd64
```
I can still reproduce this; minimal example:
```hcl
terraform {
  required_providers {
    metakube = {
      source  = "syseleven/metakube"
      version = ">= 5.2.1"
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.54.1"
    }
  }
}

provider "metakube" {
  host       = "https://metakube.syseleven.de"
  token_path = "${path.module}/syseleven_token"
}

resource "openstack_identity_application_credential_v3" "app_credential" {
  name        = "cluster-${var.cluster.name}"
  description = "app credentials for cluster ${var.cluster.name}"
}

resource "metakube_cluster" "cluster01" {
  name       = "${var.cluster.name}"
  dc_name    = "syseleven-dbl1"
  project_id = "<censored>"

  spec {
    version = "1.28"
    cloud {
      openstack {
        application_credentials {
          id     = openstack_identity_application_credential_v3.app_credential.id
          secret = openstack_identity_application_credential_v3.app_credential.secret
        }
      }
    }
  }
}
```
```
# terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # metakube_cluster.cluster01 will be created
  + resource "metakube_cluster" "cluster01" {
      + creation_timestamp     = (known after apply)
      + dc_name                = "syseleven-dbl1"
      + deletion_timestamp     = (known after apply)
      + id                     = (known after apply)
      + kube_config            = (known after apply)
      + kube_login_kube_config = (known after apply)
      + name                   = "testcluster"
      + oidc_kube_config       = (known after apply)
      + project_id             = ""

      + spec {
          + audit_logging       = false
          + enable_ssh_agent    = true
          + pod_node_selector   = false
          + pod_security_policy = false
          + pods_cidr           = (known after apply)
          + services_cidr       = (known after apply)
          + version             = "1.28"

          + cloud {
              + openstack {
                  + floating_ip_pool = (known after apply)
                  + network          = (known after apply)
                  + security_group   = (known after apply)
                  + server_group_id  = (known after apply)
                  + subnet_cidr      = (known after apply)
                  + subnet_id        = (known after apply)
                }
            }
        }
    }

  # openstack_identity_application_credential_v3.app_credential will be created
  + resource "openstack_identity_application_credential_v3" "app_credential" {
      + description  = "app credentials for cluster cilium-testcluster3"
      + id           = (known after apply)
      + name         = "cluster-cilium-testcluster3"
      + project_id   = (known after apply)
      + region       = (known after apply)
      + roles        = (known after apply)
      + secret       = (sensitive value)
      + unrestricted = false
    }

Plan: 2 to add, 0 to change, 0 to destroy.

╷
│ Warning: Argument is deprecated
│
│   with provider["registry.terraform.io/terraform-provider-openstack/openstack"],
│   on <empty> line 0:
│   (source code not available)
│
│ Users not using loadbalancer resources can ignore this message. Support for neutron-lbaas will be removed on next major release. Octavia will be the only supported method for loadbalancer resources. Users
│ using octavia will have to remove 'use_octavia' option from the provider configuration block. Users using neutron-lbaas will have to migrate/upgrade to octavia.
╵

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

openstack_identity_application_credential_v3.app_credential: Creating...
openstack_identity_application_credential_v3.app_credential: Creation complete after 1s [id=]
╷
│ Warning: Argument is deprecated
│
│   with provider["registry.terraform.io/terraform-provider-openstack/openstack"],
│   on <empty> line 0:
│   (source code not available)
│
│ Users not using loadbalancer resources can ignore this message. Support for neutron-lbaas will be removed on next major release. Octavia will be the only supported method for loadbalancer resources. Users
│ using octavia will have to remove 'use_octavia' option from the provider configuration block. Users using neutron-lbaas will have to migrate/upgrade to octavia.
╵
╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for metakube_cluster.cluster01 to include new values learned so far during apply, provider "registry.terraform.io/syseleven/metakube" produced an invalid new value for
│ .spec[0].cloud[0].openstack[0].application_credentials: block count changed from 0 to 1.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
```
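The "block count changed from 0 to 1" failure points at the provider mishandling a nested block whose values (the credential `id` and `secret`) are unknown until apply time. Until the provider is fixed, a common workaround for this class of error (not a fix for the underlying bug) is a two-step targeted apply that creates the credential first; the resource address below is taken from the config above:

```
# Step 1: create only the application credential so its id/secret become known
terraform apply -target=openstack_identity_application_credential_v3.app_credential

# Step 2: apply the rest of the plan with the credential values now in state
terraform apply
```

Terraform will warn that a `-target` apply produces an incomplete plan; that is expected in this workaround.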
Interesting, I still can't reproduce it on macOS with the same versions. Could you maybe try with a valid cluster version? The current code throws:
```
╷
│ Error: unknown version 1.28
│
│   with metakube_cluster.cluster01,
│   on main.tf line 30, in resource "metakube_cluster" "cluster01":
│   30: version = "1.28"
│
│ Please select one of available versions: [1.24.10 1.24.11 1.24.12 1.24.13 1.24.14 1.24.15 1.24.17 1.25.7 1.25.8 1.25.9 1.25.10 1.25.11 1.25.13 1.25.14 1.26.5 1.26.6 1.26.8 1.26.9 1.26.11
│ 1.26.13 1.27.7 1.27.8 1.27.10 1.28.6]
╵
```
The problem exists before Terraform even tries to create the cluster; the version-number error was from me minifying my Terraform file for GitHub.
If you run the minified code with the right version number, do you still get the error?
Yes.
```hcl
data "metakube_k8s_version" "k8s_version" {
  major = 1
  minor = 28
}

[...]

  spec {
    version = data.metakube_k8s_version.k8s_version.version
    [...]
```
Could you try removing the `.terraform` and `.terraform.d` folders?
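For reference, that cleanup might look like the following (assuming the default cache locations; removing the lock file as well forces provider re-resolution on the next init):

```
rm -rf .terraform .terraform.lock.hcl
terraform init -upgrade
```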
I already debugged all of this; I even created a fresh, empty folder for the minified deployment.
I tried on Linux as well and couldn't reproduce it. Could you try on a different machine?
Will be fixed in v5.2.2.
Fixed.
I'm trying to create the application credential on the fly and pass it to metakube to create the cluster. It works if I execute `terraform apply` twice, because it errors on the first run, suggesting a provider bug:

Tree:

modules/openstack/main.tf
modules/openstack/outputs.tf:
./main.tf:

Any idea if this is the wrong way to do it in my code, or if it is indeed a provider bug?
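For context, the module wiring implied by the tree above presumably looks something like this sketch (the output names and module label are assumptions for illustration, not the actual file contents):

```hcl
# modules/openstack/outputs.tf (hypothetical sketch)
output "app_credential_id" {
  value = openstack_identity_application_credential_v3.app_credential.id
}

output "app_credential_secret" {
  value     = openstack_identity_application_credential_v3.app_credential.secret
  sensitive = true
}

# ./main.tf (hypothetical sketch): feed the module outputs into the cluster
module "openstack" {
  source = "./modules/openstack"
}

resource "metakube_cluster" "cluster01" {
  # ...
  spec {
    cloud {
      openstack {
        application_credentials {
          id     = module.openstack.app_credential_id
          secret = module.openstack.app_credential_secret
        }
      }
    }
  }
}
```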