Closed dansteen closed 5 years ago
Also could be related to #4226
It looks like this bug has been fixed as part of the client refactoring work in 0.9, given the following job file:
job "example" {
datacenters = ["dc1"]
type = "batch"
group "cache" {
count = 1
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
ephemeral_disk {
size = 300
}
task "redis" {
driver = "raw_exec"
config {
command = "bash"
args = ["-c", "env; sleep 1000"]
}
resources {
network {
mbits = 10
port "db" {}
}
}
service {
name = "redis-cache"
tags = ["global", "cache"]
port = "db"
check {
name = "alive"
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
template {
data = "---\nkey: {{ key \"foo\" }}"
destination = "local/file.yml"
change_mode = "restart"
}
}
}
}
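For reference, the template above watches the Consul key `foo`, so a re-render can be triggered with any write to that key, for example (the value is arbitrary):

```shell
# Writing a new value to the watched key forces consul-template to re-render
consul kv put foo "some-new-value"
```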
I get the following after changing consul values:
```
[nomad(b-consul-template)] $ nomad status 76fc88df
ID                  = 76fc88df
Eval ID             = a8ed26c9
Name                = example.cache[0]
Node ID             = 22bca3ea
Job ID              = example
Job Version         = 0
Client Status       = running
Client Description  = Tasks are running
Desired Status      = run
Desired Description = <none>
Created             = 30s ago
Modified            = 2s ago

Task "redis" is "running"
Task Resources
CPU         Memory          Disk     IOPS  Addresses
26/100 MHz  22 MiB/300 MiB  300 MiB  0     db: 127.0.0.1:26380

Task Events:
Started At     = 2018-11-21T15:04:23Z
Finished At    = N/A
Total Restarts = 2
Last Restart   = 2018-11-21T16:04:23+01:00

Recent Events:
Time                       Type        Description
2018-11-21T16:04:23+01:00  Started     Task started by client
2018-11-21T16:04:23+01:00  Restarting  Task restarting in 0s
2018-11-21T16:04:18+01:00  Restarting  Template with change_mode restart re-rendered
2018-11-21T16:03:55+01:00  Started     Task started by client
2018-11-21T16:03:55+01:00  Task Setup  Building Task Directory
```
change_mode = "restart" is not good idea ... If config have errors service fail.
My configs
```hcl
template {
  data          = "{{ range ls \"fedsp/apache2/sites-enabled\" }} {{ .Value }} \n\n {{ end }}"
  destination   = "sites.conf"
  change_mode   = "signal"
  change_signal = "SIGUSR1"
}
```
The file doesn't update :(
Closing this issue because the root issue is fixed in 0.9
To what extent is it known that this is really fixed?
I have the following template:
```hcl
template {
  data          = "{{with secret \"secret/kv/certificate/domain\"}}{{.Data.privkey}}{{end}}"
  destination   = "secrets/cert.key"
  change_mode   = "signal"
  change_signal = "SIGHUP"
}

template {
  data          = "{{with secret \"secret/kv/certificate/domain\"}}{{.Data.fullchain}}{{end}}"
  destination   = "secrets/cert.crt"
  change_mode   = "signal"
  change_signal = "SIGHUP"
}
```
The key is updated in Vault, but the template is not re-rendered in the job files, and no SIGHUP is sent as indicated by the `change_mode`.
@frederikbosch Vault templates don't refresh when the Vault secret changes; they refresh based on the TTL of the secret. The rest of this issue was about Consul-backed templates (which do re-render when the Consul data changes).
I can't directly link to the right section of consul-template's README, so I'll paste it here:
> Please note that Vault does not support blocking queries. As a result, Consul Template will not immediately reload in the event a secret is changed as it does with Consul's key-value store. Instead, Consul Template renews the secret with Vault's Renewer API, which tries to use the secret for most of the time it is valid, renewing at around 90% of the lease duration (as set by Vault).
(This bit me too in the past, so you're not alone!)
@wlonkly I found that out in the meantime. If you are using a V1 KV store, you can add a field called `ttl` to the secret with the value set to an integer number of seconds, and that is then used as the TTL by Nomad / consul-template.
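As a sketch of that workaround (the path and field names follow the secret used earlier in this thread; KV v1 treats a `ttl` field in the secret data as the lease duration):

```shell
# Store the certificate material together with a ttl field (in seconds).
# consul-template / Nomad will then re-fetch the secret at roughly
# 90% of this TTL, picking up any changes at that point.
vault write secret/kv/certificate/domain \
  ttl=300 \
  privkey=@cert.key \
  fullchain=@cert.crt
```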
I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Nomad version
Nomad v0.8.6 (ab54ebcfcde062e9482558b7c052702d4cb8aa1b+CHANGES)
Operating system and Environment details
Debian 9.3
Issue
We have an issue where dynamic credentials generated by Vault get updated, and the file that the template writes to is updated, but the application is not restarted.
An example of this is the following allocation. Notice that the alloc's modified time has not changed since it was created, and that no app restart has taken place since it was created on 2018-10-03:
But if I dig into the allocation folder, I see that the file the template writes to was updated on 2018-10-10. Per the job file (below), an update to that file should trigger a restart of the application:
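One way to spot this mismatch (alloc ID, task name, and data dir below are hypothetical; adjust to your setup) is to compare what Nomad reports against the rendered file's mtime on disk:

```shell
# Task events show the last restart time Nomad believes happened
nomad status 76fc88df

# The rendered template's mtime on the client shows when it was
# actually re-written (path assumes the default data_dir layout)
ls -l /var/nomad/alloc/76fc88df/traefik/secrets/
```

If the file's mtime is newer than the last restart event, the re-render happened but the `change_mode` action did not fire.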
Reproduction steps
Given the following terraform config:

and the template file consul-policy-traefik.tpl referenced above:

and the template file vault-policy-traefik.tpl referenced above:

Job file
The following nomad job will not restart the command when the template data changes: