zitadel / zitadel-charts

This repository contains Helm charts for running ZITADEL in Kubernetes
https://zitadel.com
Apache License 2.0

Provision "root" jwt_token on installation #151

Open thomaspetit opened 8 months ago

thomaspetit commented 8 months ago

I'm looking into installing ZITADEL with the Helm chart and immediately bootstrapping it with the Terraform provider, without any human interaction: https://registry.terraform.io/providers/zitadel/zitadel/latest/docs

As per the latest docs, a token/jwt_file must be provisioned to connect to the ZITADEL instance. Is there a workaround to run the Terraform provider without logging in manually and generating a JWT token?

For example, similar setups can be found here:

hifabienne commented 8 months ago

@eliobischof @stebenz can you answer this question?

bdalpe commented 5 months ago

@thomaspetit I believe this is what you're looking for: https://github.com/zitadel/zitadel-charts/blob/main/examples/6-machine-user/README.md

This creates a secret named after whatever you configure in `.Values.zitadel.configmapConfig.FirstInstance.Org.Machine.Machine.Username`:

https://github.com/zitadel/zitadel-charts/blob/f21506f1f21308ef89ef047a224aa7c96a8c08c5/charts/zitadel/templates/setupjob.yaml#L108
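As a sketch of how to consume that secret non-interactively (my assumptions, not from the chart docs: the release runs in a `zitadel` namespace, the machine user keeps the example name `zitadel-admin-sa`, and the secret's data key matches the `<username>.json` file name — verify with `kubectl describe secret` first):

```shell
# Assumes the setup job has finished and stored the machine key in a
# secret named after the machine user ("zitadel-admin-sa" here).
# Dots inside the data key must be escaped in the jsonpath expression.
kubectl get secret zitadel-admin-sa \
  --namespace zitadel \
  --output jsonpath='{.data.zitadel-admin-sa\.json}' \
  | base64 --decode > zitadel-admin-sa.json
```

The resulting file can then be fed to the Terraform provider as the JWT profile key.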

thomaspetit commented 5 months ago

Awesome, looks like exactly what I was looking for. I should have looked at the source code a bit more carefully. 😅

Edit: On further inspection, it's not 100% what I was looking for. I had already configured the machine user. Sadly, I can't specify the actual sa.json file that is created.

I currently have this:

zitadel:
  zitadel:
    masterkeySecretName: zitadel-masterkey
    configmapConfig:
      Log:
        Level: 'error'
      ExternalDomain: zitadel.k3s.tpcservices.be
      ExternalPort: 443
      ExternalSecure: true
      TLS:
        Enabled: false
      # Please note that you choose either human or machine!
      # https://github.com/zitadel/zitadel/blob/main/cmd/setup/steps.yaml#L35
      FirstInstance:
        Org:
          name: TPCSERVICES
          Machine:
            Machine:
              Username: zitadel-admin-sa
              Name: Admin
            MachineKey:
              # ExpirationDate: "2030-01-01T00:00:00Z"
              Type: 1

I can indeed specify the MachineKey properties, but sadly I can't pass a self-created key to Zitadel.

kervel commented 5 months ago

We fixed this by also running the Terraform provisioner as a Kubernetes Job. It took some effort to get it running, but basically we mounted the generated secret as a volume in a Job that runs "terraform apply".

I can share more details if you want. I think that with some work it would be possible to integrate Terraform provisioning into the Helm chart (where you could just specify .Values.terraformScriptConfigmap or similar).
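A minimal sketch of such a Job, assuming the machine-user secret is named `zitadel-admin-sa`, the Terraform code is baked into a custom image, and the provider reads the key from a mounted path (all names and the variable wiring are placeholders, not taken from the chart):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: zitadel-terraform-apply
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: terraform
          # Custom image containing your Terraform code (and any CA setup)
          image: registry.example.com/zitadel-provisioner:latest
          workingDir: /workspace
          command: ["terraform"]
          args: ["apply", "-auto-approve"]
          env:
            # Path where the Zitadel provider expects the machine key
            - name: TF_VAR_zitadel_jwt_profile_file
              value: /zitadel/zitadel-admin-sa.json
          volumeMounts:
            - name: admin-sa
              mountPath: /zitadel
              readOnly: true
      volumes:
        - name: admin-sa
          secret:
            secretName: zitadel-admin-sa
```

Because the secret is mounted as a volume, the Job stays pending until the setup job has actually created it, which sidesteps most ordering problems.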

thomaspetit commented 5 months ago

I'm actually also doing this (using the Terraform operator from GalleyBytes), but it seems there is no way to provision that zitadel-admin-sa.json? Did you find something for that? 😃

All help or ideas are welcome.

bdalpe commented 5 months ago

@thomaspetit my comment here might help you: https://github.com/zitadel/terraform-provider-zitadel/issues/167#issue-2197959352

I found that the Zitadel Terraform Provider tries to use the secret before it exists, so you have to do one of a few things: terragrunt apply, terraform apply -target helm_release.zitadel, or make two separate modules for the Helm release and Zitadel resources so that Terraform will correctly wait for the dependency to be resolved.
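The `-target` variant mentioned above would look roughly like this (`helm_release.zitadel` is an assumed resource name from your own configuration):

```shell
# Phase 1: create only the Zitadel Helm release (and its setup job),
# so the machine-user secret exists before anything tries to read it.
terraform apply -target=helm_release.zitadel

# Phase 2: apply the remaining Zitadel resources, now that the
# secret can be resolved.
terraform apply
```

Splitting the Helm release and the Zitadel resources into separate root modules achieves the same ordering permanently, without needing `-target` on every run.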

kervel commented 5 months ago

Hi thomas!

Let me lay out my plan in a bit more detail. I guess you want to work "the other way around", but I wonder if that's really needed.

zitadel:
  configmapConfig:
    FirstInstance:
      Org:
        Machine:
          Machine:
            Username: zitadel-admin-sa
            Name: Admin
          MachineKey:
            ExpirationDate: "2026-01-01T00:00:00Z"
            # Type: 1 means JSON. This is currently the only supported machine key type.
            Type: 1
    ExternalDomain: zitadel.atlas.intern.kapernikov.com
    ExternalPort: 443
    ExternalSecure: true
    TLS:
      Enabled: false
  masterkey: x123x567890123456789012f4567891y

Now, I want to deploy my application that uses Zitadel. In my case it's logical to have the Zitadel config be part of my application's deployment procedure rather than part of Zitadel itself. I want it to be easy to deploy (so I can create as many test instances as I want).

This also means that I want to be able to deploy and configure it when my HTTPS cert is missing, or even when the DNS for my ingress is not yet correct. Here are some difficulties I had to tackle:

ingress:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    ## edit the nginx ingress controller configmap to make sure snippets are allowed; after editing, kill the pod
    nginx.ingress.kubernetes.io/configuration-snippet: |
      grpc_set_header Host $host;
    cert-manager.io/cluster-issuer: selfsigned-ca-issuer

That works, but it is not ideal, because by default the nginx ingress controller doesn't allow configuration snippets (you have to enable them when installing the controller).
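For reference, a sketch of enabling snippets via the controller's ConfigMap (the ConfigMap name and namespace depend on how the controller was installed; this is my addition, not from the thread):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your install
  namespace: ingress-nginx
data:
  # Required before nginx.ingress.kubernetes.io/configuration-snippet
  # annotations take effect; restart the controller pod afterwards.
  allow-snippet-annotations: "true"
```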

Second difficulty: I now have to use the public ingress to connect to my Zitadel instance. I'd rather connect using the internal Kubernetes service, because that is both more robust (it works even when the ingress is not fine yet for whatever reason) and more secure. But if I change the URI, I also change the issuer, because of https://github.com/zitadel/terraform-provider-zitadel/issues/143

Because I used a self-signed cert, I need to modify the Terraform Docker image to automatically trust my self-signed cert:

openssl s_client -connect $TF_VAR_ZITADEL_DOMAIN:443 </dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /usr/local/share/ca-certificates/example.crt
update-ca-certificates

This would also not be a problem if I could connect over plain HTTP (it is intra-cluster anyway), but that doesn't work because it would again change the issuer. So I think (without more support from Zitadel) the right way would be to add a sidecar to the Terraform container that acts as a proxy. That way I don't have to use gRPC over the ingress, and I can rewrite the "Host" header so that it matches the issuer in the Zitadel configuration.

In the deployment YAML of my Job, I also mount the secret of the admin user so Terraform can access it (I guess that's not the way you want to do it). This has a disadvantage: Zitadel needs to run in the same namespace as my app. But there are secret-copier operators that could alleviate that.

I don't use the operator; I use a Job as part of the post-install hooks of my own Helm chart, so I'm free to add a sidecar. But I don't know whether the Terraform operator would allow that.

Greetings, Frank

eliobischof commented 2 months ago

> @thomaspetit my comment here might help you: zitadel/terraform-provider-zitadel#167 (comment)
>
> I found that the Zitadel Terraform Provider tries to use the secret before it exists, so you have to do one of a few things: terragrunt apply, terraform apply -target helm_release.zitadel, or make two separate modules for the Helm release and Zitadel resources so that Terraform will correctly wait for the dependency to be resolved.

Could this issue actually be closed if we implemented zitadel/terraform-provider-zitadel#167 (comment)?