@GunnarMorrigan Nomad doesn't expose Variables to the environment automatically (although that would be a cool feature). As shown in that Discuss thread, you need a `template` block to fetch the Variables and use the `env` flag to expose them to the task's environment.
@tgross thank you for your reply. What would the content of the template block look like? The `range` and `ls` functions seem to be from Consul, and I do not use Consul, just plain Nomad. I just want all variables in `nomad/jobs` to be available for use in my job specs.
Nodes will be connected through a protected Tailscale/Netmaker private network.
Edit1: I tried this approach with no success:
```hcl
env {
  MQTT_IP   = "${mqtt_ip}"
  MQTT_PORT = "${mqtt_port}"
  MQTT_NAME = "${node.unique.name}"
}
```
```hcl
template {
  data        = <<EOH
{{ with nomadVar "nomad/jobs" }}
mqtt_ip={{ .mqtt_ip }}
mqtt_port={{ .mqtt_port }}
{{ end }}
EOH
  destination = "local/env"
  env         = true
}
```
Edit2: There was some success. The files from the template block are written to `local/env`, but I just want them in the actual env. The job still sets the env variable `MQTT_IP` to the literal string `${mqtt_ip}`. This is not what I want; I want Nomad to have replaced `${mqtt_ip}` with the value that it has itself.
Edit3: Okay, with the above stanza it kinda works, but not the way I want it to. The env block items are not interpolated at all, contrary to what I expected. This seems like an oversight to me.
The env contains:
```
MQTT_IP=${mqtt_ip}
mqtt_ip=100.XXX.YYY.ZZZ
mqtt_port=1883
MQTT_PORT=${mqtt_port}
MQTT_NAME=CLIENT_NAME
```
I would expect Nomad to leave no `${ ... }` in the job spec after replacing all possible items, or maybe just fail the spec directly if any remain. This also explains why I can't get to my DOCKER_USERNAME variables.
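A sketch of one way around this: skip the `env` block entirely and render the final variable names straight from the template (assuming `{{ env "node.unique.name" }}` resolves the node attribute through the template's `env` function):

```hcl
template {
  data        = <<EOH
{{ with nomadVar "nomad/jobs" }}
MQTT_IP={{ .mqtt_ip }}
MQTT_PORT={{ .mqtt_port }}
{{ end }}
MQTT_NAME={{ env "node.unique.name" }}
EOH
  destination = "local/env"
  env         = true
}
```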
@GunnarMorrigan could you please share how you resolved the issue, or reopen it if it is still not solved? I have the exact same problem and am not sure how to authenticate to a private Docker container registry correctly.
@tgross maybe you can help me out. Here is the gist of what I have.
I set my variables with
```shell
nomad var put -force nomad/jobs/config \
  github_cr_username=$GITHUB_CR_USERNAME \
  github_cr_token=$GITHUB_CR_TOKEN
```
and have this job configured in nomad-pack:

```hcl
job "myjob" {
  group "mygroup" {
    task "mytask" {
      driver = "docker"

      config {
        image = "ghcr.io/myorg/myimage:latest"
        ports = ["http"]

        auth {
          username = "$GITHUB_CR_USERNAME"
          password = "$GITHUB_CR_TOKEN"
        }
      }

      template {
        destination = "${NOMAD_SECRETS_DIR}/env.vars"
        env         = true
        change_mode = "restart"
        data        = <<EOH
{{- with nomadVar "nomad/jobs/config" -}}
GITHUB_CR_USERNAME = {{ .github_cr_username }}
GITHUB_CR_TOKEN = {{ .github_cr_token }}
{{- end -}}
EOH
      }
    }
  }
}
```
Somehow authentication doesn't work for ghcr.io. What would be a good way to debug whether the environment was set when the job was evaluated? And what would be the right way to do this kind of authentication?
@AAverin I was able to get normal env variables into the container env. Maybe for the username and password you can try something like:
```hcl
auth {
  username = <<EOH
{{- with nomadVar "nomad/jobs/config" -}}
{{ .github_cr_username }}
password = {{ .github_cr_token }}
{{- end -}}
EOH
}
```
I tried this last night but wasn't able to actually run it. My local PC only has WSL, and in that Nomad did not recognize the Docker driver.
> What would be a good way to debug whether the environment was set when the job was evaluated?
You can debug by running a container image like `busybox:1` with `env` as the command, and checking the logs.
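For example, a throwaway job along these lines (the job, task, and variable names here are illustrative) prints the task environment to its logs:

```hcl
# Minimal debug job: render the Nomad Variable into the task
# environment, then run `env` so the result shows up in the logs.
job "env-debug" {
  group "debug" {
    task "print-env" {
      driver = "docker"

      config {
        image   = "busybox:1"
        command = "env"
      }

      template {
        destination = "secrets/env.vars"
        env         = true
        data        = <<EOH
{{- with nomadVar "nomad/jobs/config" -}}
GITHUB_CR_USERNAME={{ .github_cr_username }}
{{- end -}}
EOH
      }
    }
  }
}
```

Then `nomad alloc logs <alloc-id> print-env` shows whether the variable made it into the environment.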
> What would be the right way to do this kind of authentication?
Keep in mind that the Nomad client (and the task driver) is talking to Docker over the Docker API, and it's Docker itself that authenticates to the Docker registry. The Docker API doesn't pass along the environment variables we set in the task driver; instead, there's a Docker auth helper you need to have configured.
You can configure that in the `auth` block in the `plugin "docker" {}` configuration block on the client. What do you have configured there?
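For reference, a minimal sketch of that client-side configuration (the path and helper name are illustrative):

```hcl
# Nomad client agent configuration.
plugin "docker" {
  config {
    auth {
      # Either point at a Docker config.json that holds registry credentials...
      config = "/etc/nomad.d/docker-auth.json"

      # ...or name the suffix of a docker-credential-<helper> binary
      # available on the client's $PATH.
      # helper = "ghcr"
    }
  }
}
```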
@tgross I don't have any special configuration set for Docker on the client. My expectation would be that if I have a cluster of multiple clients running on Nomad, I should be able to run a custom Docker image coming from a private GitHub container registry by providing authentication for Docker in the task. Is this a wrong expectation?
I am following the documentation here: https://developer.hashicorp.com/nomad/docs/drivers/docker#authentication

> If you want to pull from a private repo (for example on dockerhub or quay.io), you will need to specify credentials in your job
:facepalm: You're right, I was reading that list as "AND" when it's clearly "OR". Ok, so I'd try to debug the credentials as I noted above.
@tgross Which variables can I check if I have this configuration? I can see the correct values set as variables in `jobs/config`, but authentication still fails with `API error (500): Head "https://ghcr.io/v2/myorg/myimage/manifests/latest": denied`:
```hcl
config {
  image = "ghcr.io/myorg/myimage:latest"
  ports = ["http"]

  auth {
    username = <<EOH
{{- with nomadVar "nomad/jobs/config" -}}
{{ .github_cr_username }}
{{- end -}}
EOH
    password = <<EOH
{{- with nomadVar "nomad/jobs/config" -}}
{{ .github_cr_token }}
{{- end -}}
EOH
  }
}
```
I know that my credentials are correct because they work if I hardcode them, so the issue is around passing those credentials safely somehow.
You can't do templating like that inside the `auth` block, only inside a `template` block. What I intended to suggest was that you should verify the `template` block is rendering correctly by going and looking at the output to ensure the credentials are there.
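For example (the alloc ID is a placeholder and the rendered values are illustrative; since the template above writes into `NOMAD_SECRETS_DIR`, `alloc exec` is a handy way to look at it):

```console
$ nomad alloc exec <alloc-id> cat /secrets/env.vars
GITHUB_CR_USERNAME = myuser
GITHUB_CR_TOKEN = <token>
```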
Ok, so the issue is in my template code. Somehow 2 lines got merged into a single line. How do I add a line break?
```
{{- with nomadVar "nomad/jobs/config" -}}
GITHUB_CR_USERNAME = {{ .github_cr_username }}
GITHUB_CR_TOKEN = {{ .github_cr_token }}
{{- end -}}
[[- if .my.database_is_local ]]
{{- with nomadVar "nomad/jobs/db_mariadb" -}}
MARIADB_DATABASE = {{ .database }}
MARIADB_USER = {{ .username }}
MARIADB_PASSWORD = {{ .password }}
DATABASE_KIND = "MariaDB"
{{- end -}}
{{ range nomadService [[ .my.mariadb_nomad_service_name | quote ]] }}
MARIADB_HOST="{{ .Address }}"
MARIADB_PORT="{{ .Port }}"
{{ end }}
[[- else]]
{{- with nomadVar "nomad/jobs/db_postgress" -}}
POSTGRESS_DATABASE = {{ .database }}
POSTGRESS_USER = {{ .username }}
POSTGRESS_PASSWORD = {{ .password }}
DATABASE_KIND = "Postgress"
{{- end -}}
[[- end ]]
```
This resulted in `GITHUB_CR_TOKEN=mytokenMARIADB_DATABASE = mydatabase`.
Oh, ok. That's likely because of the dash you're using with the delimiters (e.g. `{{-` instead of `{{`), which removes whitespace including newlines. So here's an example of a template that gets bad results.
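A reconstruction consistent with the rendered output below (the variable values are illustrative):

```hcl
template {
  destination = "local/test.txt"
  data        = <<EOH
{{- with nomadVar "nomad/jobs/config" -}}
GITHUB_CR_USERNAME = {{ .github_cr_username }}
GITHUB_CR_TOKEN = {{ .github_cr_token }}
{{- end -}}
{{- with nomadVar "nomad/jobs/db_mariadb" -}}
MARIADB_DATABASE = {{ .database }}
MARIADB_USER = {{ .username }}
MARIADB_PASSWORD = {{ .password }}
{{- end -}}
EOH
}
```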
That renders like this:
```console
$ nomad alloc fs 01c6 http/local/test.txt
GITHUB_CR_USERNAME = username
GITHUB_CR_TOKEN = password1MARIADB_DATABASE = db1
MARIADB_USER = admin
MARIADB_PASSWORD = password2%
```
And then the same job with a template with the `{{- end -}}` replaced by `{{ end }}` where appropriate.
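Reconstructed the same way, consistent with the output:

```hcl
template {
  destination = "local/test.txt"
  data        = <<EOH
{{- with nomadVar "nomad/jobs/config" -}}
GITHUB_CR_USERNAME = {{ .github_cr_username }}
GITHUB_CR_TOKEN = {{ .github_cr_token }}
{{ end }}
{{- with nomadVar "nomad/jobs/db_mariadb" -}}
MARIADB_DATABASE = {{ .database }}
MARIADB_USER = {{ .username }}
MARIADB_PASSWORD = {{ .password }}
{{- end -}}
EOH
}
```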
Shows as:
```console
$ nomad alloc fs e7b6 http/local/test.txt
GITHUB_CR_USERNAME = username
GITHUB_CR_TOKEN = password1
MARIADB_DATABASE = db1
MARIADB_USER = admin
MARIADB_PASSWORD = password2%
```
Hey all, just so that I get it right: there's no workaround to get the `${}` syntax working (in template or env blocks) when other variables are referenced?
I'm trying to create a Nomad deployment version of Supabase, but the huge .env file, whose values are used all over the compose file, makes it very hard.
Does anyone have a workaround to get templates like this:
```
{{- with nomadVar "general" }}
PRODUCT_NAME="{{ .product }}"
API_HOST="127.0.0.1"
API_URL="http://${API_HOST}"
PRODUCT_API_URL="${API_URL}/products/${PRODUCT_NAME}"
{{- end }}
```
to be resolved to environment variables like this:
```
PRODUCT_NAME="cool-product"
API_HOST="127.0.0.1"
API_URL="http://127.0.0.1"
PRODUCT_API_URL="http://127.0.0.1/products/cool-product"
```
@NiklasPor please do us a favor and post new issues rather than adding questions to closed ones.
But no, the template language isn't evaluated as a shell script and isn't recursively evaluated in any case. A template block like this:
```hcl
template {
  data        = <<EOT
{{- with nomadVar "nomad/jobs/example" }}
PRODUCT_NAME="{{ .product }}"
API_HOST="127.0.0.1"
API_URL="http://${API_HOST}"
PRODUCT_API_URL="${API_URL}/products/${PRODUCT_NAME}"
{{- end }}
EOT
  destination = "local/foo.env"
}
```
gets resolved to:
```console
$ nomad var put nomad/jobs/example product=foo
$ nomad alloc exec 88d0 cat /local/foo.env
PRODUCT_NAME="foo"
API_HOST="127.0.0.1"
API_URL="http://${API_HOST}"
PRODUCT_API_URL="${API_URL}/products/${PRODUCT_NAME}"
```
Which is what I'd expect because it allows the allocation workload to resolve the resulting shell script itself (via whatever shell happens to ship in the container image, if any). If you want to have all the variables interpolated before the task starts, you'd need to use the template language to do that. Like so:
```hcl
template {
  data        = <<EOT
{{- with nomadVar "nomad/jobs/example" -}}
PRODUCT_API_URL="{{ .api_url }}/products/{{ .product }}"
{{- end -}}
EOT
  destination = "local/foo.env"
}
```
```console
$ nomad var put nomad/jobs/example api_url="http://127.0.0.1" product=foo
$ nomad alloc exec c155 cat /local/foo.env
PRODUCT_API_URL="http://127.0.0.1/products/foo"
```
What worked fine for me is a post-processing script doing something like:
```shell
PUBLIC_IP=$PUBLIC_IP envsubst < cwd_front.tfvars.tpl > cwd_front.tfvars
```
basically substituting env variables with a bash script in my tfvars files and then feeding those into Nomad.
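A sketch of that wrapper (the exported value is illustrative):

```shell
# Render shell-style ${VARS} in the template with envsubst,
# then hand the rendered file to Nomad / nomad-pack as usual.
export PUBLIC_IP="203.0.113.10"   # example value
envsubst < cwd_front.tfvars.tpl > cwd_front.tfvars
```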
Nomad version
Output from `nomad version`:

```
Nomad v1.6.1 BuildDate 2023-07-21T13:49:42Z Revision 515895c7690cdc72278018dc5dc58aca41204ccc
```
Operating system and Environment details
RHEL Azure VM, 2 vCPUs, 8 GB RAM. Single server node, one client node; test setup.
Issue
Nomad variables are not properly interpolated. Variables:
job specs
From here: https://discuss.hashicorp.com/t/nomad-job-spec-environment-variable-best-practices/32736/5
Output:
Reproduction steps
Use same config as above.
Expected Result
Variables should be interpolated properly. `MQTT_IP="{{env mqtt_ip}}"` is probably not the correct syntax, but `MQTT_PORT="${mqtt_port}"` is correct; this env variable should be replaced properly.
Actual Result
The literal string that should have been interpolated is used as-is.
Job file (if appropriate)
Other possibly related issue:
I am also having trouble using DOCKER_USERNAME and DOCKER_PASSWORD in the auth username and password fields:
This produces the error:

```
3 errors occurred:
* failed to parse config:
* Unknown variable: There is no variable named "DOCKER_USERNAME".
* Unknown variable: There is no variable named "DOCKER_PASSWORD".
```

What is going on here?