hashicorp / nomad

Nomad env block interpolation with Nomad variable does not work. #18098

Closed GunnarMorrigan closed 1 year ago

GunnarMorrigan commented 1 year ago

Nomad version

Output from nomad version

Nomad v1.6.1
BuildDate 2023-07-21T13:49:42Z
Revision 515895c7690cdc72278018dc5dc58aca41204ccc

Operating system and Environment details

RHEL on an Azure VM, 2 vCPUs, 8 GB RAM.

Single server node and one client node (test setup).

Issue

Nomad Variables are not properly interpolated in the env block. (A screenshot of the stored Variables was attached here.)

Job spec:

job "env" {
  datacenters = ["dc1"]
  type        = "batch"

  group "env" {
    task "env" {
      driver = "docker"

      config {
        image   = "alpine:3.15"
        command = "/bin/sh"
        args    = ["-c", "env | grep MQTT"]
      }

      env {
        MQTT_IP = "{{env mqtt_ip}}"
        MQTT_PORT = "${mqtt_port}"
        MQTT_NAME = "${node.unique.name}"
      }

    }
  }
}

From here: https://discuss.hashicorp.com/t/nomad-job-spec-environment-variable-best-practices/32736/5

Output:

MQTT_IP={{env mqtt_ip}}
MQTT_PORT=${mqtt_port}
MQTT_NAME=PROPER_RESULT_HERE

Reproduction steps

Use the same config as above.

Expected Result

Variables should be interpolated properly. MQTT_IP = "{{env mqtt_ip}}" is probably not the correct syntax, but MQTT_PORT = "${mqtt_port}" should be, and that environment variable should be replaced with the Variable's value.

Actual Result

The literal, uninterpolated string is passed to the task instead.

Another, possibly related, issue:

I am also having trouble using DOCKER_USERNAME and DOCKER_PASSWORD in the auth username and password fields:

config {
    image = "image:0.1"
    auth {
        username = "${DOCKER_USERNAME}"
        password = "${DOCKER_PASSWORD}"
    }
}

This produces the error:

3 errors occurred:
* failed to parse config:
* Unknown variable: There is no variable named "DOCKER_USERNAME".
* Unknown variable: There is no variable named "DOCKER_PASSWORD".

What is going on here?

tgross commented 1 year ago

@GunnarMorrigan Nomad doesn't expose Variables to the environment automatically (although that would be a cool feature). As shown in that Discuss thread, you need a template block to fetch the Variables and use the env flag to expose them to the task's environment.
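
For reference, a minimal sketch of that approach for the job above (assuming the Variables live at the path nomad/jobs with keys mqtt_ip and mqtt_port):

template {
  data        = <<EOH
{{ with nomadVar "nomad/jobs" }}
MQTT_IP={{ .mqtt_ip }}
MQTT_PORT={{ .mqtt_port }}
{{ end }}
EOH
  destination = "local/env"
  env         = true
}

With env = true, Nomad parses the rendered file as key=value pairs and injects them into the task's environment before the task starts.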

GunnarMorrigan commented 1 year ago

@tgross thank you for your reply. What would the content of the template block look like? The range and ls functions seem to be from Consul, and I do not use Consul, just plain Nomad. I just want all Variables under nomad/jobs to be available for use in my job specs.

Nodes will be connected through a protected Tailscale/Netmaker private network.

Edit1: I tried this approach with no success:

      env {
        MQTT_IP = "${mqtt_ip}"
        MQTT_PORT = "${mqtt_port}"
        MQTT_NAME = "${node.unique.name}"
      }
      template {
        data        = <<EOH
{{ with nomadVar "nomad/jobs" }}
mqtt_ip={{ .mqtt_ip }}
mqtt_port={{ .mqtt_port }}
{{ end }}
EOH
        destination = "local/env"
        env         = true
      }

Edit2: There was some success: the file from the template block is written to local/env. But I want the values in the actual environment. The job still sets the env variable MQTT_IP to the literal string ${mqtt_ip}. That is not what I want; I want Nomad to have replaced ${mqtt_ip} with the value that it holds.

Edit3:

OK, with the above stanza it kind of works, but not the way I want it to. The env block items are not interpolated with the template's values at all, contrary to what I expected. This seems like an oversight to me.

The env contains:

MQTT_IP=${mqtt_ip}
mqtt_ip=100.XXX.YYY.ZZZ
mqtt_port=1883
MQTT_PORT=${mqtt_port}
MQTT_NAME=CLIENT_NAME

I would expect Nomad to interpolate the ${mqtt_ip} and ${mqtt_port} entries in the env block with the values it holds.

This also explains why I can't get to my DOCKER_USERNAME variables.

AAverin commented 1 year ago

@GunnarMorrigan could you please share how you resolved the issue, or reopen it if it is still not solved? I have the exact same problem and am not sure how to authenticate to a private Docker container registry correctly.

AAverin commented 1 year ago

@tgross maybe you can help me out. Here is the gist of what I have.

I set my variables with

nomad var put -force nomad/jobs/config \
    github_cr_username=$GITHUB_CR_USERNAME \
    github_cr_token=$GITHUB_CR_TOKEN

and have this job configured in nomad-pack

job "myjob" {
  group "mygroup" {
    task "mytask" {
       driver = "docker"
       config {
          image = "ghcr.io/myorg/myimage:latest"
          ports = ["http"]

          auth {
            username = "$GITHUB_CR_USERNAME"
            password = "$GITHUB_CR_TOKEN"
          }
        }

        template {
          destination = "${NOMAD_SECRETS_DIR}/env.vars"
          env         = true
          change_mode = "restart"
          data        = <<EOH
  {{- with nomadVar "nomad/jobs/config" -}}
  GITHUB_CR_USERNAME = {{ .github_cr_username }}
  GITHUB_CR_TOKEN = {{ .github_cr_token }}
  {{- end -}}
EOH
        }
      }
    }
  }

Somehow authentication doesn't work for ghcr.io.

What would be a good way to debug whether the environment was set when the job was evaluated? What would be the right way to do this kind of authentication?

GunnarMorrigan commented 1 year ago

@AAverin I was able to get normal env variables into the container env. Maybe for the username and password you can try something like:

auth {
     username = <<EOH
{{- with nomadVar "nomad/jobs/config" -}}
{{ .github_cr_username }}
password = {{ .github_cr_token }}
{{- end -}}
EOH
}

I tried this last night but wasn't able to actually run it. My local PC only has WSL, and Nomad did not recognize the Docker driver there.

tgross commented 1 year ago

What would be a good way to debug if environment was set when the job was evaluated?

You can debug by running a container image like busybox:1 and just running env as the command and checking the logs.
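
Something like this throwaway batch job (a sketch; the template body would be whatever you are actually trying to render) prints the task environment to the allocation logs:

job "env-debug" {
  datacenters = ["dc1"]
  type        = "batch"

  group "debug" {
    task "debug" {
      driver = "docker"

      config {
        # busybox's env prints the container environment and exits
        image   = "busybox:1"
        command = "env"
      }

      template {
        data        = <<EOH
{{ with nomadVar "nomad/jobs/config" }}
GITHUB_CR_USERNAME={{ .github_cr_username }}
{{ end }}
EOH
        destination = "local/env"
        env         = true
      }
    }
  }
}

Then inspect the output with nomad alloc logs <alloc-id>.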

What would be the right way to do this kind of authentication?

Keep in mind that the Nomad client (and the task driver) is talking to Docker over the Docker API. And it's Docker itself that authenticates to the Docker registry. The Docker API doesn't pass the environment variables we set in the task driver along, and instead there's a Docker auth helper you need to have configured.

You can configure that in the auth block in the plugin "docker" {} configuration block on the client. What do you have configured there?
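
For reference, that client-side configuration looks roughly like this (a sketch; the file path and helper name are placeholders, and you would normally set only one of the two options):

plugin "docker" {
  config {
    auth {
      # a file in dockercfg format holding registry credentials
      config = "/etc/docker-auth.json"
      # or the suffix of a docker-credential-* helper on the client's $PATH
      helper = "ecr-login"
    }
  }
}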

AAverin commented 1 year ago

@tgross I don't have any special configuration set for Docker on the client. My expectation would be that if I have a cluster of multiple clients running on Nomad, I should be able to run a custom Docker image from a private GitHub container registry by providing authentication for Docker in the task. Is this a wrong expectation?

I am following the documentation here: https://developer.hashicorp.com/nomad/docs/drivers/docker#authentication which says: "If you want to pull from a private repo (for example on dockerhub or quay.io), you will need to specify credentials in your job".

tgross commented 1 year ago

:facepalm: You're right, I was reading that list as "AND" when it's clearly "OR". Ok, so I'd try to debug the credentials as I noted above.

AAverin commented 1 year ago

@tgross Which variables can I check if I have this configuration? I can see the correct values set as Variables in nomad/jobs/config, but authentication still fails with:

API error (500): Head "https://ghcr.io/v2/myorg/myimage/manifests/latest": denied

config {
        image = "ghcr.io/myorg/myimage:latest"
        ports = ["http"]

        auth {
          username = <<EOH
{{- with nomadVar "nomad/jobs/config" -}}
{{ .github_cr_username }}
{{- end -}}
EOH
          password = <<EOH
{{- with nomadVar "nomad/jobs/config" -}}
{{ .github_cr_token }}
{{- end -}}
EOH
        }
      }

AAverin commented 1 year ago

I know that my credentials are correct because they work if I hardcode them. So the issue is around passing those credentials safely somehow.

tgross commented 1 year ago

You can't do templating like that inside the auth block, only inside a template block. What I intended to suggest was that you should verify the template block is rendering correctly by going and looking at the output to ensure the credentials are there.
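
For example, since the template above renders into the secrets directory, something like this (the alloc ID is a placeholder) shows the rendered file from inside the task:

$ nomad alloc exec <alloc-id> cat /secrets/env.vars

For the Docker driver, NOMAD_SECRETS_DIR is mounted at /secrets inside the container, which is why the path reads that way here.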

AAverin commented 1 year ago

Ok, so the issue is in my template code. Somehow 2 lines got merged into a single line. How do I add a line break?

{{- with nomadVar "nomad/jobs/config" -}}
GITHUB_CR_USERNAME = {{ .github_cr_username }}
GITHUB_CR_TOKEN = {{ .github_cr_token }}
{{- end -}}
[[- if .my.database_is_local ]]
{{- with nomadVar "nomad/jobs/db_mariadb" -}}
MARIADB_DATABASE = {{ .database }}
MARIADB_USER = {{ .username }}
MARIADB_PASSWORD = {{ .password }}
DATABASE_KIND = "MariaDB"
{{- end -}}
{{ range nomadService [[ .my.mariadb_nomad_service_name | quote ]] }}
MARIADB_HOST="{{ .Address }}"
MARIADB_PORT="{{ .Port }}"
{{ end }}
[[- else]]
{{- with nomadVar "nomad/jobs/db_postgress" -}}
POSTGRESS_DATABASE = {{ .database }}
POSTGRESS_USER = {{ .username }}
POSTGRESS_PASSWORD = {{ .password }}
DATABASE_KIND = "Postgress"
{{- end -}}
[[- end ]]

This resulted in GITHUB_CR_TOKEN=mytokenMARIADB_DATABASE = mydatabase

tgross commented 1 year ago

Oh, ok. That's likely because of the dash you're using with the delimiters (e.g. {{- instead of {{), which removes whitespace including newlines. So here's an example of a template that gets bad results:

bad template:

job "example" {
  group "web" {
    task "http" {
      driver = "docker"

      config {
        image   = "busybox:1"
        command = "httpd"
        args    = ["-vv", "-f", "-p", "8001", "-h", "/local"]
      }

      template {
        data        = <<EOT
(the template body was truncated in the original; see the reconstruction below)
EOT
        destination = "local/test.txt"
      }
    }
  }
}
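
The collapsed example did not survive here, but based on the template earlier in the thread and the rendered output below, its data plausibly looked like this (a reconstruction, not necessarily tgross's exact text):

{{- with nomadVar "nomad/jobs/config" -}}
GITHUB_CR_USERNAME = {{ .github_cr_username }}
GITHUB_CR_TOKEN = {{ .github_cr_token }}
{{- end -}}
{{- with nomadVar "nomad/jobs/db_mariadb" -}}
MARIADB_DATABASE = {{ .database }}
MARIADB_USER = {{ .username }}
MARIADB_PASSWORD = {{ .password }}
{{- end -}}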

That renders like this:

$ nomad alloc fs 01c6 http/local/test.txt
GITHUB_CR_USERNAME = username
GITHUB_CR_TOKEN = password1MARIADB_DATABASE = db1
MARIADB_USER = admin
MARIADB_PASSWORD = password2%

And then the same job, with the {{- end -}} delimiters replaced by {{ end }} where appropriate:

good template:

job "example" {
  group "web" {
    task "http" {
      driver = "docker"

      config {
        image   = "busybox:1"
        command = "httpd"
        args    = ["-vv", "-f", "-p", "8001", "-h", "/local"]
      }

      template {
        data        = <<EOT
(the template body was truncated in the original; see the reconstruction below)
EOT
        destination = "local/test.txt"
      }
    }
  }
}
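
Again the collapsed example was lost; given the fixed output below, the data plausibly became something like this (a reconstruction, keeping the dashes only where trimming is actually wanted):

{{ with nomadVar "nomad/jobs/config" -}}
GITHUB_CR_USERNAME = {{ .github_cr_username }}
GITHUB_CR_TOKEN = {{ .github_cr_token }}
{{ end -}}
{{ with nomadVar "nomad/jobs/db_mariadb" -}}
MARIADB_DATABASE = {{ .database }}
MARIADB_USER = {{ .username }}
MARIADB_PASSWORD = {{ .password }}
{{- end -}}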

Shows as:

$ nomad alloc fs e7b6 http/local/test.txt
GITHUB_CR_USERNAME = username
GITHUB_CR_TOKEN = password1
MARIADB_DATABASE = db1
MARIADB_USER = admin
MARIADB_PASSWORD = password2%

NiklasPor commented 6 months ago

Hey all, just so that I get it right: there's no workaround to get the ${} syntax working (in template or env blocks) when other variables are referenced?

I'm trying to create a Nomad deployment of Supabase, but the huge .env file, whose values are used all over the compose file, makes it very hard.

Does anyone have a workaround to get templates like this:

{{- with nomadVar "general" }}

PRODUCT_NAME="{{ .product }}"
API_HOST="127.0.0.1"
API_URL="http://${API_HOST}"
PRODUCT_API_URL="${API_URL}/products/${PRODUCT_NAME}"

{{- end }}

to be resolved to environment variables like this:

PRODUCT_NAME="cool-product"
API_HOST="127.0.0.1"
API_URL="http://127.0.0.1"
PRODUCT_API_URL="http://127.0.0.1/products/cool-product"

tgross commented 6 months ago

@NiklasPor please do us a favor and post new issues rather than adding questions to closed ones.

But no, the template language isn't evaluated as a shell script and isn't recursively evaluated in any case. A template block like this:

      template {
        data = <<EOT
{{- with nomadVar "nomad/jobs/example" }}

PRODUCT_NAME="{{ .product }}"
API_HOST="127.0.0.1"
API_URL="http://${API_HOST}"
PRODUCT_API_URL="${API_URL}/products/${PRODUCT_NAME}"

{{- end }}

EOT
        destination = "local/foo.env"
      }

gets resolved to:

$ nomad var put nomad/jobs/example product=foo

$ nomad alloc exec 88d0 cat /local/foo.env
PRODUCT_NAME="foo"
API_HOST="127.0.0.1"
API_URL="http://${API_HOST}"
PRODUCT_API_URL="${API_URL}/products/${PRODUCT_NAME}"

Which is what I'd expect because it allows the allocation workload to resolve the resulting shell script itself (via whatever shell happens to ship in the container image, if any). If you want to have all the variables interpolated before the task starts, you'd need to use the template language to do that. Like so:

      template {
        data = <<EOT
{{- with nomadVar "nomad/jobs/example" -}}
PRODUCT_API_URL="{{ .api_url }}/products/{{ .product }}"

{{- end -}}
EOT
        destination = "local/foo.env"
      }

$ nomad var put nomad/jobs/example api_url="http://127.0.0.1" product=foo

$ nomad alloc exec c155 cat /local/foo.env
PRODUCT_API_URL="http://127.0.0.1/products/foo"

AAverin commented 6 months ago

What worked fine for me is a post-processing script doing something like:

PUBLIC_IP=$PUBLIC_IP envsubst < cwd_front.tfvars.tpl > cwd_front.tfvars

basically substituting env variables into my tfvars files with a bash script and then feeding those into Nomad.
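
For example (a hypothetical invocation; the value and file names are placeholders):

$ export PUBLIC_IP=203.0.113.10
$ envsubst < cwd_front.tfvars.tpl > cwd_front.tfvars

envsubst (from GNU gettext) replaces ${PUBLIC_IP}-style references in the template file with values from the shell environment, so the rendered tfvars file can then be fed to nomad-pack as usual.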