mrparkers / terraform-provider-keycloak

Terraform provider for Keycloak
https://registry.terraform.io/providers/mrparkers/keycloak/latest/docs
MIT License

Keycloak provider tries to initialize without waiting for dependent resources #375

Open jankosz opened 3 years ago

jankosz commented 3 years ago

Hi,

In our project, the Keycloak instance is provisioned by Terraform. At that time a random admin password is created. I want to use this password to initialize the mrparkers/keycloak provider.

Here is the simplified config we are using:

```
provider "keycloak" {
  client_id = "admin-cli"
  username  = "admin"
  password  = random_password.password.result

  initial_login            = false
  tls_insecure_skip_verify = true

  url = "https://localhost"
}

resource "keycloak_realm" "realm" {
  realm             = "test_realm"
  enabled           = true
  display_name      = "Test realm"
  display_name_html = "Test realm"
}

resource "random_password" "password" {
  length           = 16
  special          = true
  override_special = "%@"
}
```

Unfortunately, that gives me the following error on terraform plan/apply/destroy:

```
Error: error initializing keycloak provider
must specify client id, username and password for password grant, or client id and secret for client credentials grant
```

The provider doesn't wait for the password to be created. Is there any workaround for using randomly generated passwords?

Best regards, Maciej

dmeyerholt commented 3 years ago

You can only use input variables inside a provider configuration. A workaround would be a multi-stage setup: first provision Keycloak in one state, then use a second Terraform state that reads the provisioning state as input. See Here
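The multi-stage approach described above could be sketched with the `terraform_remote_state` data source. This is a hypothetical example: the output name, backend, and state path are assumptions, not something from this thread. Stage 1 would export the generated password as an output; stage 2 would read it:

```
# Stage 1 (the state that provisions Keycloak) exports the password:
#
#   output "keycloak_admin_password" {
#     value     = random_password.password.result
#     sensitive = true
#   }

# Stage 2 reads that state and configures the provider from it.
# Backend type and path are placeholders for illustration.
data "terraform_remote_state" "keycloak_infra" {
  backend = "local"

  config = {
    path = "../keycloak-infra/terraform.tfstate"
  }
}

provider "keycloak" {
  client_id = "admin-cli"
  username  = "admin"
  password  = data.terraform_remote_state.keycloak_infra.outputs.keycloak_admin_password
  url       = "https://localhost"
}
```

Because the password already exists in the stage-1 state by the time stage 2 runs, provider initialization no longer races against resource creation.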

mrparkers commented 3 years ago

Yeah, this unfortunately isn't something you can do in a single run of terraform apply. Provider initialization will always happen first, and if the random_password resource doesn't exist by then, the password attribute for the provider will be an empty string, so all Keycloak API calls will be guaranteed to fail anyway.

I actually do something similar in the environment I run Keycloak in, but instead of using random_password, I store the password in Vault and use the vault_generic_secret resource to pull it out. Not sure if that helps here, but you might be able to do something like that to move forward.
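The Vault approach mentioned above might look roughly like this. This is a sketch under assumptions: the secret path and the `password` key are made up for illustration, and it uses the `vault_generic_secret` data source to read the value.

```
# Hypothetical: the admin password was stored in Vault out-of-band
# (or by a separate Terraform state), so it exists before this
# configuration runs.
data "vault_generic_secret" "keycloak_admin" {
  path = "secret/keycloak/admin"
}

provider "keycloak" {
  client_id = "admin-cli"
  username  = "admin"
  password  = data.vault_generic_secret.keycloak_admin.data["password"]
  url       = "https://localhost"
}
```

Note this only sidesteps the ordering problem because the secret is created outside the apply that initializes the Keycloak provider.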

Regardless, if you create the random_password first, then add the provider block, you should be fine.
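One way to get that ordering without editing the config in two passes is a targeted apply. This is a sketch of a possible workflow, not an officially recommended pattern (targeted applies are intended for exceptional use):

```
# Step 1: create only the password resource.
terraform apply -target=random_password.password

# Step 2: full apply; the provider can now read the existing password.
terraform apply
```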

DuncanvR commented 3 years ago

That isn't the case for all providers, e.g. the official one for Kubernetes. I have a working configuration in which I create a Kubernetes cluster in Azure and use its output in the provider block for connecting to Kubernetes. Then within the cluster I create a deployment of Keycloak, which I was hoping to manage using this provider.

The config looks something like the following (some details left out for brevity):

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.42.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "=1.13.0"
    }
    keycloak = {
      source  = "mrparkers/keycloak"
      version = "=2.1.0"
    }
  }
  required_version = "=0.14.4"
}

provider "azurerm" { ... }

resource "azurerm_kubernetes_cluster" "aks" { ... }

provider "kubernetes" {
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
  host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
  password               = azurerm_kubernetes_cluster.aks.kube_config.0.password
  username               = azurerm_kubernetes_cluster.aks.kube_config.0.username
  load_config_file       = false
}

resource "random_password" "keycloak_admin" {
  length           = 24
  special          = true
  override_special = "_%@"
}

resource "kubernetes_secret" "keycloak_env" {
  metadata {
    name = "keycloak-env"
  }
  data = {
    DB_ADDR           = var.database_server_hostname
    DB_DATABASE       = var.keycloak_database_name
    DB_PASSWORD       = var.keycloak_database_password
    DB_USER           = var.keycloak_database_username
    KEYCLOAK_PASSWORD = random_password.keycloak_admin.result
    KEYCLOAK_USER     = "admin"
  }
}

resource "kubernetes_deployment" "keycloak" {
  metadata {
    name = "keycloak"
  }
  spec {
    template {
      spec {
        container {
          image = "quay.io/keycloak/keycloak:11.0.3"
          name  = "keycloak"
          env_from {
            secret_ref {
              name = "keycloak-env"
            }
          }
        }
      }
    }
  }
}

Up to this point everything works. Starting from scratch, a single terraform apply will create the cluster and spin up a Keycloak pod. With an additional service and ingress I can log in to the management pane using the admin password generated by Terraform. The next bit, however, does not work, giving the same error as mentioned by @jankosz.

provider "keycloak" {
  client_id     = "admin-cli"
  initial_login = false
  password      = random_password.keycloak_admin.result
  url           = var.keycloak_url_within_cluster
  username      = "admin"
}

resource "keycloak_realm" "realm" { ... }

@mrparkers, would you happen to know what the Kubernetes provider does differently that allows it to accept such a dynamic configuration?