hashicorp / nomad

Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.
https://www.nomadproject.io/

Nomad v1.5.2 tracks dead allocations #16739

Closed: suikast42 closed this issue 1 year ago

suikast42 commented 1 year ago

After I restart the Nomad cluster, Consul keeps running health checks against services that no longer exist.

As far as I can see in the Nomad log, Nomad kills an allocation during startup but does not deregister it in Consul.

The UUID of the dead allocation is df9c5c6e-a682-9014-f2b6-c4dad387b330.
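To see which checks Consul is still running for services that no longer exist, one can ask the local Consul agent directly. A minimal sketch, assuming an agent reachable over plain HTTP on the default port 8500 (this cluster actually serves HTTPS on 127.0.0.1:8501 per the config further down, so the address and TLS options would need adjusting):

```bash
# List all health checks currently in state "critical" on this agent,
# then filter for checks that belong to the dead allocation's services.
curl -s http://127.0.0.1:8500/v1/health/state/critical \
  | grep -o '_nomad-task-df9c5c6e[^"]*' | sort -u
```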

A snippet of the Nomad log:

2023-03-30T22:59:30+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_migrator: waiting for previous alloc to terminate: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 previous_alloc=df9c5c6e-a682-9014-f2b6-c4dad387b330
2023-03-30T22:59:30+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_migrator: waiting for previous alloc to terminate: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 previous_alloc=df9c5c6e-a682-9014-f2b6-c4dad387b330


Restarting the Nomad systemd service solves the problem; the dead allocations then disappear.
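For reference, that workaround on a systemd-managed client looks roughly like this (same port/TLS caveat as above; after the restart, the critical checks from the dead allocation should clear):

```bash
# Restart the Nomad client agent; the dead allocation is cleaned up afterwards
# and its stale Consul registrations disappear.
sudo systemctl restart nomad

# Optionally verify that nothing is still registered for the dead allocation.
curl -s http://127.0.0.1:8500/v1/agent/services | grep df9c5c6e || echo "no stale services"
```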

This issue appeared with version 1.5.2; downgrading to 1.5.1 also resolves it.

Nomad version

Nomad v1.5.2 BuildDate 2023-03-21T22:54:38Z Revision 9a2fdb5f53dce81edf2802f0b64962e07596fd03

Consul version

Consul v1.15.1 Revision 7c04b6a0 Build Date 2023-03-07T20:35:33Z Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)

Nomad and Consul logs

Dead allocation: df9c5c6e-a682-9014-f2b6-c4dad387b330
New allocation: 0cafa163-5810-a78e-28dc-e081ebfcded6
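If a full Nomad restart is undesirable, the leaked registrations could presumably also be removed by hand through the Consul agent API. A hedged sketch only, assuming the string logged as `service=` below is also the Consul service ID (confirm via `/v1/agent/services` first), and bearing in mind that Nomad may re-sync these entries, so this is a stopgap rather than a fix:

```bash
# Deregister the dead allocation's services on the local Consul agent.
# IDs taken from the "Synced service" log lines below.
curl -s -X PUT http://127.0.0.1:8500/v1/agent/service/deregister/_nomad-task-df9c5c6e-a682-9014-f2b6-c4dad387b330-group-nats-nats-client
curl -s -X PUT http://127.0.0.1:8500/v1/agent/service/deregister/_nomad-task-df9c5c6e-a682-9014-f2b6-c4dad387b330-group-nats-nats-prometheus-exporter-prometheus-exporter
```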

2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter type=Received msg="Task received by client" failed=false
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats type=Received msg="Task received by client" failed=false
2023-03-30T22:58:10+02:00 [consul.service ๐Ÿ’ป worker-01] [โœ…] agent: Synced service: service=_nomad-task-df9c5c6e-a682-9014-f2b6-c4dad387b330-group-nats-nats-prometheus-exporter-prometheus-exporter
2023-03-30T22:58:10+02:00 [consul.service ๐Ÿ’ป worker-01] [โœ…] agent: Synced service: service=_nomad-task-df9c5c6e-a682-9014-f2b6-c4dad387b330-group-nats-nats-client
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner: lifecycle start condition has been met, proceeding: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner: lifecycle start condition has been met, proceeding: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [โš ] client.alloc_runner.task_runner.task_hook: failed to reattach to logmon process: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats error="Reattachment process not found"
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats path=/usr/local/bin/nomad pid=2504
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats path=/usr/local/bin/nomad args=["/usr/local/bin/nomad", "logmon"]
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats path=/usr/local/bin/nomad
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter path=/usr/local/bin/nomad args=["/usr/local/bin/nomad", "logmon"]
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [โš ] client.alloc_runner.task_runner.task_hook: failed to reattach to logmon process: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter error="Reattachment process not found"
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter path=/usr/local/bin/nomad pid=2510
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter path=/usr/local/bin/nomad
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats @module=logmon address=/tmp/plugin2006176162 network=unix timestamp=2023-03-30T20:58:10.791Z
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats version=2
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter version=2
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter address=/tmp/plugin1562593499 network=unix @module=logmon timestamp=2023-03-30T20:58:10.802Z
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats path=/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/alloc/logs/.nats.stdout.fifo @module=logmon timestamp=2023-03-30T20:58:10.806Z
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter @module=logmon path=/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/alloc/logs/.nats-prometheus-exporter.stdout.fifo timestamp=2023-03-30T20:58:10.805Z
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats @module=logmon path=/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/alloc/logs/.nats.stderr.fifo timestamp=2023-03-30T20:58:10.806Z
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter @module=logmon path=/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/alloc/logs/.nats-prometheus-exporter.stderr.fifo timestamp=2023-03-30T20:58:10.805Z
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [โš ] client.alloc_runner.task_runner.task_hook.api: error creating task api socket: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter path=/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/nats-prometheus-exporter/secrets/api.sock error="listen unix /opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/nats-prometheus-exporter/secrets/api.sock: bind: invalid argument"
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] agent: (runner) final config: {"Consul":{"Address":"127.0.0.1:8501","Namespace":"","Auth":{"Enabled":false,"Username":""},"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"/usr/local/share/ca-certificates/cloudlocal/cluster-ca-bundle.pem","CaPath":"","Cert":"/etc/opt/certs/consul/consul.pem","Enabled":true,"Key":"/etc/opt/certs/consul/consul-key.pem","ServerName":"","Verify":true},"Token":"","TokenFile":"","Transport":{"CustomDialer":null,"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":9,"TLSHandshakeTimeout":10000000000}},"Dedup":{"Enabled":false,"MaxStale":2000000000,"Prefix":"consul-template/dedup/","TTL":15000000000,"BlockQueryWaitTime":60000000000},"DefaultDelims":{"Left":null,"Right":null},"Exec":{"Command":[],"Enabled":false,"Env":{"Denylist":[],"Custom":[],"Pristine":false,"Allowlist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":0},"KillSignal":2,"LogLevel":"WARN","FileLog":{"LogFilePath":"","LogRotateBytes":0,"LogRotateDuration":86400000000000,"LogRotateMaxFiles":0},"MaxStale":2000000000,"PidFile":"","ReloadSignal":1,"Syslog":{"Enabled":false,"Facility":"LOCAL0","Name":"consul-template"},"Templates":[{"Backup":false,"Command":[],"CommandTimeout":30000000000,"Contents":"# Client port of ++ env \"NOMAD_PORT_client\" ++ on all interfaces\nport: ++ env \"NOMAD_PORT_client\" ++\n\n# HTTP monitoring port\nmonitor_port: ++ env \"NOMAD_PORT_http\" ++\nserver_name: \"++ env \"NOMAD_ALLOC_NAME\" ++\"\n#If true enable protocol trace log messages. Excludes the system account.\ntrace: false\n#If true enable protocol trace log messages. 
Includes the system account.\ntrace_verbose: false\n#if true enable debug log messages\ndebug: false\nhttp_port: ++ env \"NOMAD_PORT_http\" ++\n#http: nats.service.consul:++ env \"NOMAD_PORT_http\" ++\n\njetstream {\n store_dir: /data/jetstream\n\n # 1GB\n max_memory_store: 2G\n\n # 10GB\n max_file_store: 10G\n}\n","CreateDestDirs":true,"Destination":"/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/nats/local/nats.conf","ErrMissingKey":false,"ErrFatal":true,"Exec":{"Command":[],"Enabled":false,"Env":{"Denylist":[],"Custom":[],"Pristine":false,"Allowlist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":30000000000},"Perms":420,"User":null,"Uid":null,"Group":null,"Gid":null,"Source":"","Wait":{"Enabled":false,"Min":0,"Max":0},"LeftDelim":"++","RightDelim":"++","FunctionDenylist":["plugin","writeToFile"],"SandboxPath":"/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/nats"}],"TemplateErrFatal":null,"Vault":{"Address":"","Enabled":false,"Namespace":"","RenewToken":false,"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":true,"Key":"","ServerName":"","Verify":true},"Transport":{"CustomDialer":null,"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":9,"TLSHandshakeTimeout":10000000000},"UnwrapToken":false,"DefaultLeaseDuration":300000000000,"LeaseRenewalThreshold":0.9,"K8SAuthRoleName":"","K8SServiceAccountTokenPath":"/run/secrets/kubernetes.io/serviceaccount/token","K8SServiceAccountToken":"","K8SServiceMountPath":"kubernetes"},"Nomad":{"Address":"","Enabled":true,"Namespace":"default","SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":false,"Key":"","ServerName":"","Verify":true},"AuthUsername":"","AuthPassword":"","Transport":{"CustomDialer":{},"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":9,"TLSHandshakeTimeout":10000000000},"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true}},"Wait":{"Enabled":false,"Min":0,"Max":0},"Once":false,"ParseOnly":false,"BlockQueryWaitTime":60000000000}
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] agent: (runner) rendering "(dynamic)" => "/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/nats/local/nats.conf"
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats type=Terminated msg="Exit Code: 1, Exit Message: \"Docker container exited with non-zero exit code: 1\"" failed=false
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.stats_hook: failed to start stats collection for task with unrecoverable error: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats error="container stopped"
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter type=Terminated msg="Exit Code: 2, Exit Message: \"Docker container exited with non-zero exit code: 2\"" failed=false
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.stats_hook: failed to start stats collection for task with unrecoverable error: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter error="container stopped"
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: restarting task: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats reason="Restart within policy" delay=5.703643048s
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats type=Restarting msg="Task restarting in 5.703643048s" failed=false
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: restarting task: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter reason="Restart within policy" delay=5.703643048s
2023-03-30T22:58:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter type=Restarting msg="Task restarting in 5.703643048s" failed=false
2023-03-30T22:58:16+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner: lifecycle start condition has been met, proceeding: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter
2023-03-30T22:58:16+02:00 [nomad.service ๐Ÿ’ป worker-01] [โš ] client.alloc_runner.task_runner.task_hook.api: error creating task api socket: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter path=/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/nats-prometheus-exporter/secrets/api.sock error="listen unix /opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/nats-prometheus-exporter/secrets/api.sock: bind: invalid argument"
2023-03-30T22:58:16+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner: lifecycle start condition has been met, proceeding: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats
2023-03-30T22:58:16+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.driver_mgr.docker: binding directories: driver=docker task_name=nats-prometheus-exporter binds="[]string{\"/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/alloc:/alloc\", \"/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/nats-prometheus-exporter/local:/local\", \"/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/nats-prometheus-exporter/secrets:/secrets\"}"
2023-03-30T22:58:16+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.driver_mgr.docker: applied labels on the container: driver=docker task_name=nats-prometheus-exporter labels="map[com.github.logunifier.application.name:prometheus-nats-exporter com.github.logunifier.application.pattern.key:tslevelmsg com.github.logunifier.application.version:0.10.1.0 com.hashicorp.nomad.alloc_id:df9c5c6e-a682-9014-f2b6-c4dad387b330 com.hashicorp.nomad.job_id:observability com.hashicorp.nomad.job_name:observability com.hashicorp.nomad.namespace:default com.hashicorp.nomad.node_id:0b854fe8-fa1a-1ec2-def2-914f1fae8dd7 com.hashicorp.nomad.node_name:worker-01 com.hashicorp.nomad.task_group_name:nats com.hashicorp.nomad.task_name:nats-prometheus-exporter]"
2023-03-30T22:58:16+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.driver_mgr.docker: setting container name: driver=docker task_name=nats-prometheus-exporter container_name=nats-prometheus-exporter-df9c5c6e-a682-9014-f2b6-c4dad387b330
2023-03-30T22:58:16+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.driver_mgr.docker: applied labels on the container: driver=docker task_name=nats labels="map[com.github.logunifier.application.name:nats com.github.logunifier.application.pattern.key:tslevelmsg com.github.logunifier.application.version:2.9.15 com.hashicorp.nomad.alloc_id:df9c5c6e-a682-9014-f2b6-c4dad387b330 com.hashicorp.nomad.job_id:observability com.hashicorp.nomad.job_name:observability com.hashicorp.nomad.namespace:default com.hashicorp.nomad.node_id:0b854fe8-fa1a-1ec2-def2-914f1fae8dd7 com.hashicorp.nomad.node_name:worker-01 com.hashicorp.nomad.task_group_name:nats com.hashicorp.nomad.task_name:nats]"
2023-03-30T22:58:16+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.driver_mgr.docker: binding directories: driver=docker task_name=nats binds="[]string{\"/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/alloc:/alloc\", \"/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/nats/local:/local\", \"/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/nats/secrets:/secrets\", \"/opt/services/core/nomad/data/alloc/df9c5c6e-a682-9014-f2b6-c4dad387b330/nats/local/nats.conf:/config/nats.conf\"}"
2023-03-30T22:58:16+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.driver_mgr.docker: setting container name: driver=docker task_name=nats container_name=nats-df9c5c6e-a682-9014-f2b6-c4dad387b330
2023-03-30T22:58:16+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats type=Started msg="Task started by client" failed=false
2023-03-30T22:58:17+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter type=Started msg="Task started by client" failed=false
2023-03-30T22:59:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T22:59:10+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats type="Restart Signaled" msg="healthcheck: check \"service: \"nats-prometheus-exporter\" check\" unhealthy" failed=false
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter type="Restart Signaled" msg="healthcheck: check \"service: \"nats-prometheus-exporter\" check\" unhealthy" failed=false
2023-03-30T22:59:20+02:00 [consul.service ๐Ÿ’ป worker-01] [โœ…] agent: Deregistered service: service=_nomad-task-df9c5c6e-a682-9014-f2b6-c4dad387b330-group-nats-nats-client
2023-03-30T22:59:20+02:00 [consul.service ๐Ÿ’ป worker-01] [โœ…] agent: Deregistered service: service=_nomad-task-df9c5c6e-a682-9014-f2b6-c4dad387b330-group-nats-nats-prometheus-exporter-prometheus-exporter
2023-03-30T22:59:20+02:00 [consul.service ๐Ÿ’ป worker-01] [โœ…] agent: Synced service: service=_nomad-task-df9c5c6e-a682-9014-f2b6-c4dad387b330-group-nats-nats-prometheus-exporter-prometheus-exporter
2023-03-30T22:59:20+02:00 [consul.service ๐Ÿ’ป worker-01] [โœ…] agent: Synced service: service=_nomad-task-df9c5c6e-a682-9014-f2b6-c4dad387b330-group-nats-nats-client
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats type=Terminated msg="Exit Code: 1, Exit Message: \"Docker container exited with non-zero exit code: 1\"" failed=false
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: not restarting task: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats reason="Exceeded allowed attempts 1 in interval 1h0m0s and mode is \"fail\""
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats type="Not Restarting" msg="Exceeded allowed attempts 1 in interval 1h0m0s and mode is \"fail\"" failed=true
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner: task failure, destroying all tasks: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 failed_task=nats
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter type="Sibling Task Failed" msg="Task's sibling \"nats\" failed" failed=false
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats type=Killing msg="Sent interrupt. Waiting 5s before force killing" failed=false
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon.stdio: received EOF, stopping recv loop: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats err="rpc error: code = Unavailable desc = error reading from server: EOF"
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner: task run loop exiting: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter type=Terminated msg="Exit Code: 2, Exit Message: \"Docker container exited with non-zero exit code: 2\"" failed=false
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: plugin exited: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner.task_hook.logmon: plugin process exited: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats path=/usr/local/bin/nomad pid=2504
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter type=Killing msg="Sent interrupt. Waiting 5s before force killing" failed=false
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter type="Not Restarting" msg="Exceeded allowed attempts 1 in interval 1h0m0s and mode is \"fail\"" failed=true
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: not restarting task: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter reason="Exceeded allowed attempts 1 in interval 1h0m0s and mode is \"fail\""
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon.stdio: received EOF, stopping recv loop: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter err="rpc error: code = Unavailable desc = error reading from server: EOF"
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner.task_hook.logmon: plugin process exited: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter path=/usr/local/bin/nomad pid=2510
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.gc: marking allocation for GC: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner: waiting for task to exit: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner: task run loop exiting: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter
2023-03-30T22:59:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: plugin exited: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter
2023-03-30T22:59:30+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats-prometheus-exporter type=Received msg="Task received by client" failed=false
2023-03-30T22:59:30+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats type=Received msg="Task received by client" failed=false
2023-03-30T22:59:30+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_migrator: waiting for previous alloc to terminate: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 previous_alloc=df9c5c6e-a682-9014-f2b6-c4dad387b330
2023-03-30T22:59:30+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_migrator: waiting for previous alloc to terminate: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 previous_alloc=df9c5c6e-a682-9014-f2b6-c4dad387b330
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner: lifecycle start condition has been met, proceeding: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.runner_hook: received result from CNI: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 result="{\"Interfaces\":{\"eth0\":{\"IPConfigs\":[{\"IP\":\"172.26.68.183\",\"Gateway\":\"172.26.64.1\"}],\"Mac\":\"46:c3:bf:c3:47:9b\",\"Sandbox\":\"/var/run/docker/netns/6767bcfbe63b\"},\"nomad\":{\"IPConfigs\":null,\"Mac\":\"d2:dd:3d:8b:8d:9a\",\"Sandbox\":\"\"},\"veth65e321a2\":{\"IPConfigs\":null,\"Mac\":\"ae:ad:fa:6e:36:22\",\"Sandbox\":\"\"}},\"DNS\":[{}],\"Routes\":[{\"dst\":\"0.0.0.0/0\"}]}"
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats type="Task Setup" msg="Building Task Directory" failed=false
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats path=/usr/local/bin/nomad args=["/usr/local/bin/nomad", "logmon"]
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats path=/usr/local/bin/nomad pid=7306
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats path=/usr/local/bin/nomad
2023-03-30T22:59:31+02:00 [consul.service ๐Ÿ’ป worker-01] [โœ…] agent: Synced service: service=_nomad-task-0cafa163-5810-a78e-28dc-e081ebfcded6-group-nats-nats-client
2023-03-30T22:59:31+02:00 [consul.service ๐Ÿ’ป worker-01] [โœ…] agent: Synced service: service=_nomad-task-0cafa163-5810-a78e-28dc-e081ebfcded6-group-nats-nats-prometheus-exporter-prometheus-exporter
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats version=2
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats @module=logmon address=/tmp/plugin3215782575 network=unix timestamp=2023-03-30T20:59:31.457Z
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats @module=logmon path=/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/alloc/logs/.nats.stdout.fifo timestamp=2023-03-30T20:59:31.461Z
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats @module=logmon path=/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/alloc/logs/.nats.stderr.fifo timestamp=2023-03-30T20:59:31.461Z
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] agent: (runner) final config: {"Consul":{"Address":"127.0.0.1:8501","Namespace":"","Auth":{"Enabled":false,"Username":""},"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"/usr/local/share/ca-certificates/cloudlocal/cluster-ca-bundle.pem","CaPath":"","Cert":"/etc/opt/certs/consul/consul.pem","Enabled":true,"Key":"/etc/opt/certs/consul/consul-key.pem","ServerName":"","Verify":true},"Token":"","TokenFile":"","Transport":{"CustomDialer":null,"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":9,"TLSHandshakeTimeout":10000000000}},"Dedup":{"Enabled":false,"MaxStale":2000000000,"Prefix":"consul-template/dedup/","TTL":15000000000,"BlockQueryWaitTime":60000000000},"DefaultDelims":{"Left":null,"Right":null},"Exec":{"Command":[],"Enabled":false,"Env":{"Denylist":[],"Custom":[],"Pristine":false,"Allowlist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":0},"KillSignal":2,"LogLevel":"WARN","FileLog":{"LogFilePath":"","LogRotateBytes":0,"LogRotateDuration":86400000000000,"LogRotateMaxFiles":0},"MaxStale":2000000000,"PidFile":"","ReloadSignal":1,"Syslog":{"Enabled":false,"Facility":"LOCAL0","Name":"consul-template"},"Templates":[{"Backup":false,"Command":[],"CommandTimeout":30000000000,"Contents":"# Client port of ++ env \"NOMAD_PORT_client\" ++ on all interfaces\nport: ++ env \"NOMAD_PORT_client\" ++\n\n# HTTP monitoring port\nmonitor_port: ++ env \"NOMAD_PORT_http\" ++\nserver_name: \"++ env \"NOMAD_ALLOC_NAME\" ++\"\n#If true enable protocol trace log messages. Excludes the system account.\ntrace: false\n#If true enable protocol trace log messages. 
Includes the system account.\ntrace_verbose: false\n#if true enable debug log messages\ndebug: false\nhttp_port: ++ env \"NOMAD_PORT_http\" ++\n#http: nats.service.consul:++ env \"NOMAD_PORT_http\" ++\n\njetstream {\n store_dir: /data/jetstream\n\n # 1GB\n max_memory_store: 2G\n\n # 10GB\n max_file_store: 10G\n}\n","CreateDestDirs":true,"Destination":"/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/nats/local/nats.conf","ErrMissingKey":false,"ErrFatal":true,"Exec":{"Command":[],"Enabled":false,"Env":{"Denylist":[],"Custom":[],"Pristine":false,"Allowlist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":30000000000},"Perms":420,"User":null,"Uid":null,"Group":null,"Gid":null,"Source":"","Wait":{"Enabled":false,"Min":0,"Max":0},"LeftDelim":"++","RightDelim":"++","FunctionDenylist":["plugin","writeToFile"],"SandboxPath":"/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/nats"}],"TemplateErrFatal":null,"Vault":{"Address":"","Enabled":false,"Namespace":"","RenewToken":false,"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":true,"Key":"","ServerName":"","Verify":true},"Transport":{"CustomDialer":null,"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":9,"TLSHandshakeTimeout":10000000000},"UnwrapToken":false,"DefaultLeaseDuration":300000000000,"LeaseRenewalThreshold":0.9,"K8SAuthRoleName":"","K8SServiceAccountTokenPath":"/run/secrets/kubernetes.io/serviceaccount/token","K8SServiceAccountToken":"","K8SServiceMountPath":"kubernetes"},"Nomad":{"Address":"","Enabled":true,"Namespace":"default","SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":false,"Key":"","ServerName":"","Verify":true},"AuthUsername":"","AuthPassword":"","Transport":{"CustomDialer":{},"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":9,"TLSHandshakeTimeout":10000000000},"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true}},"Wait":{"Enabled":false,"Min":0,"Max":0},"Once":false,"ParseOnly":false,"BlockQueryWaitTime":60000000000}
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] agent: (runner) rendering "(dynamic)" => "/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/nats/local/nats.conf"
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] agent: (runner) rendered "(dynamic)" => "/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/nats/local/nats.conf"
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.driver_mgr.docker: binding directories: driver=docker task_name=nats binds="[]string{\"/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/alloc:/alloc\", \"/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/nats/local:/local\", \"/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/nats/secrets:/secrets\", \"/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/nats/local/nats.conf:/config/nats.conf\"}"
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.driver_mgr.docker: setting container name: driver=docker task_name=nats container_name=nats-0cafa163-5810-a78e-28dc-e081ebfcded6
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.driver_mgr.docker: applied labels on the container: driver=docker task_name=nats labels="map[com.github.logunifier.application.name:nats com.github.logunifier.application.pattern.key:tslevelmsg com.github.logunifier.application.version:2.9.15 com.hashicorp.nomad.alloc_id:0cafa163-5810-a78e-28dc-e081ebfcded6 com.hashicorp.nomad.job_id:observability com.hashicorp.nomad.job_name:observability com.hashicorp.nomad.namespace:default com.hashicorp.nomad.node_id:0b854fe8-fa1a-1ec2-def2-914f1fae8dd7 com.hashicorp.nomad.node_name:worker-01 com.hashicorp.nomad.task_group_name:nats com.hashicorp.nomad.task_name:nats]"
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats type=Started msg="Task started by client" failed=false
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner: lifecycle start condition has been met, proceeding: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats-prometheus-exporter
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats-prometheus-exporter type="Task Setup" msg="Building Task Directory" failed=false
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats-prometheus-exporter path=/usr/local/bin/nomad args=["/usr/local/bin/nomad", "logmon"]
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats-prometheus-exporter path=/usr/local/bin/nomad
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats-prometheus-exporter path=/usr/local/bin/nomad pid=7376
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats-prometheus-exporter version=2
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats-prometheus-exporter @module=logmon address=/tmp/plugin3396555438 network=unix timestamp=2023-03-30T20:59:31.734Z
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats-prometheus-exporter @module=logmon path=/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/alloc/logs/.nats-prometheus-exporter.stderr.fifo timestamp=2023-03-30T20:59:31.738Z
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats-prometheus-exporter @module=logmon path=/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/alloc/logs/.nats-prometheus-exporter.stdout.fifo timestamp=2023-03-30T20:59:31.738Z
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [โš ] client.alloc_runner.task_runner.task_hook.api: error creating task api socket: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats-prometheus-exporter path=/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/nats-prometheus-exporter/secrets/api.sock error="listen unix /opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/nats-prometheus-exporter/secrets/api.sock: bind: invalid argument"
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.driver_mgr.docker: applied labels on the container: driver=docker task_name=nats-prometheus-exporter labels="map[com.github.logunifier.application.name:prometheus-nats-exporter com.github.logunifier.application.pattern.key:tslevelmsg com.github.logunifier.application.version:0.10.1.0 com.hashicorp.nomad.alloc_id:0cafa163-5810-a78e-28dc-e081ebfcded6 com.hashicorp.nomad.job_id:observability com.hashicorp.nomad.job_name:observability com.hashicorp.nomad.namespace:default com.hashicorp.nomad.node_id:0b854fe8-fa1a-1ec2-def2-914f1fae8dd7 com.hashicorp.nomad.node_name:worker-01 com.hashicorp.nomad.task_group_name:nats com.hashicorp.nomad.task_name:nats-prometheus-exporter]"
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.driver_mgr.docker: binding directories: driver=docker task_name=nats-prometheus-exporter binds="[]string{\"/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/alloc:/alloc\", \"/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/nats-prometheus-exporter/local:/local\", \"/opt/services/core/nomad/data/alloc/0cafa163-5810-a78e-28dc-e081ebfcded6/nats-prometheus-exporter/secrets:/secrets\"}"
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.driver_mgr.docker: setting container name: driver=docker task_name=nats-prometheus-exporter container_name=nats-prometheus-exporter-0cafa163-5810-a78e-28dc-e081ebfcded6
2023-03-30T22:59:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 task=nats-prometheus-exporter type=Started msg="Task started by client" failed=false
2023-03-30T23:00:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:00:20+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:00:30+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:01:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:01:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:01:41+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:02:41+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:02:41+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:02:51+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:03:51+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:03:51+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:04:01+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:05:01+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:05:01+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:05:11+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:06:11+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:06:11+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:06:21+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:07:22+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:07:22+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:07:29+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.gc: garbage collecting allocation: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 reason="forced collection"
2023-03-30T23:07:29+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats type=Killing msg="Sent interrupt. Waiting 5s before force killing" failed=false
2023-03-30T23:07:29+02:00 [nomad.service ๐Ÿ’ป worker-01] [โœ…] client.alloc_runner.task_runner: Task event: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 task=nats-prometheus-exporter type=Killing msg="Sent interrupt. Waiting 5s before force killing" failed=false
2023-03-30T23:07:29+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] client.gc: alloc garbage collected: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330
2023-03-30T23:07:32+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:08:32+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:08:32+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:08:42+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:09:42+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:09:42+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:09:52+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:10:52+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:10:52+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:11:02+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:12:02+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:12:02+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:12:12+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:13:13+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:13:13+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:13:23+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:14:23+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:14:23+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:14:33+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:15:33+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:15:33+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:15:43+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:16:43+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:16:43+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:16:53+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:17:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:17:31+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:17:32+02:00 [consul.service ๐Ÿ’ป worker-01] [โœ…] agent: Synced service: service=_nomad-task-df9c5c6e-a682-9014-f2b6-c4dad387b330-group-nats-nats-client
2023-03-30T23:17:32+02:00 [consul.service ๐Ÿ’ป worker-01] [โœ…] agent: Synced service: service=_nomad-task-0cafa163-5810-a78e-28dc-e081ebfcded6-group-nats-nats-client
2023-03-30T23:17:32+02:00 [consul.service ๐Ÿ’ป worker-01] [โœ…] agent: Synced service: service=_nomad-task-0cafa163-5810-a78e-28dc-e081ebfcded6-group-nats-nats-prometheus-exporter-prometheus-exporter
2023-03-30T23:17:32+02:00 [consul.service ๐Ÿ’ป worker-01] [โœ…] agent: Synced service: service=_nomad-task-df9c5c6e-a682-9014-f2b6-c4dad387b330-group-nats-nats-prometheus-exporter-prometheus-exporter
2023-03-30T23:17:33+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: canceling restart because check became healthy: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 check="service: \"nats\" check" task=group-nats
2023-03-30T23:17:33+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: canceling restart because check became healthy: alloc_id=0cafa163-5810-a78e-28dc-e081ebfcded6 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:17:53+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:17:53+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:18:03+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:19:04+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:19:04+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:19:14+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:19:22+02:00 [nomad.service ๐Ÿ’ป master-01] [๐Ÿž] http: request complete: method=GET path=/v1/client/allocation/0cafa163-5810-a78e-28dc-e081ebfcded6/stats duration=1.658772ms
2023-03-30T23:19:25+02:00 [nomad.service ๐Ÿ’ป master-01] [๐Ÿž] http: request complete: method=GET path=/v1/client/allocation/0cafa163-5810-a78e-28dc-e081ebfcded6/stats duration=1.48332ms
2023-03-30T23:19:27+02:00 [nomad.service ๐Ÿ’ป master-01] [๐Ÿž] http: request complete: method=GET path=/v1/client/allocation/0cafa163-5810-a78e-28dc-e081ebfcded6/stats duration=1.209809ms
2023-03-30T23:19:27+02:00 [nomad.service ๐Ÿ’ป master-01] [๐Ÿž] http: request complete: method=GET path=/v1/client/allocation/0cafa163-5810-a78e-28dc-e081ebfcded6/stats duration=1.696273ms
2023-03-30T23:19:29+02:00 [nomad.service ๐Ÿ’ป master-01] [๐Ÿž] http: request complete: method=GET path=/v1/client/allocation/0cafa163-5810-a78e-28dc-e081ebfcded6/stats duration=2.283916ms
2023-03-30T23:19:29+02:00 [nomad.service ๐Ÿ’ป master-01] [๐Ÿž] http: request complete: method=GET path=/v1/client/allocation/0cafa163-5810-a78e-28dc-e081ebfcded6/stats duration=9.126446ms
2023-03-30T23:19:32+02:00 [nomad.service ๐Ÿ’ป master-01] [๐Ÿž] http: request complete: method=GET path=/v1/client/allocation/0cafa163-5810-a78e-28dc-e081ebfcded6/stats duration=1.157108ms
2023-03-30T23:19:32+02:00 [nomad.service ๐Ÿ’ป master-01] [๐Ÿž] http: request complete: method=GET path=/v1/client/allocation/0cafa163-5810-a78e-28dc-e081ebfcded6/stats duration=1.302646ms
2023-03-30T23:20:14+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:20:14+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:20:24+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:21:24+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:21:24+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:21:34+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:22:34+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:22:34+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:22:44+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:23:44+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats\" check" task=group-nats time_limit=20s
2023-03-30T23:23:44+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: check became unhealthy. Will restart if check doesn't become healthy: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats time_limit=10s
2023-03-30T23:23:55+02:00 [nomad.service ๐Ÿ’ป worker-01] [๐Ÿž] watch.checks: restarting due to unhealthy check: alloc_id=df9c5c6e-a682-9014-f2b6-c4dad387b330 check="service: \"nats-prometheus-exporter\" check" task=group-nats
2023-03-30T23:24:05+02:00 [nomad.service ๐Ÿ’ป master-01] [๐Ÿž] http: request complete: method=GET path=/v1/allocations?prefix=df9c5c6e-a682-9014-f2b6-c4dad387b330 duration="172.779ยตs"

tgross commented 1 year ago

Hi @suikast42! This is a known issue and is being tracked in https://github.com/hashicorp/nomad/issues/16616. I'm going to close this issue as a duplicate and link to it from #16616 so the folks working on that are aware of your logs, just in case that helps. Thanks for reporting this!