hashicorp / nomad

Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.
https://www.nomadproject.io/

Consul service deregistration gets "ACL not found" w/ Workload Identities #23494

Open sijo13 opened 3 weeks ago

sijo13 commented 3 weeks ago

Product Version

OS: CentOS Linux release 7.9.2009
Nomad version: v1.7.3
Consul version: v1.17.3

Issue Description

We are seeing the error "Unexpected response code: 403 (ACL not found)" in the Nomad logs, and as a result services are not registered/deregistered in Consul until the Nomad service is restarted. Our Nomad cluster runs around 100+ jobs, and the issue appears intermittently on one or two jobs; every time a different job is impacted.

We are using the JWT mechanism outlined in "Consul ACL with Nomad Workload Identities" (Nomad | HashiCorp Developer) to authenticate Nomad workloads against Consul.
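
For illustration, this is roughly how a per-service identity can be expressed in a job spec under this setup; it is a minimal sketch, and the service name, port label, and check values are placeholders rather than copied from our jobs:

service {
  name     = "example-web"   # placeholder name
  port     = "http"          # placeholder port label
  provider = "consul"

  # Per-service override; without it, the server-level
  # service_identity defaults shown below apply.
  identity {
    aud = ["consul.io"]
    ttl = "1h"
  }

  check {
    type     = "http"
    path     = "/health"
    interval = "10s"
    timeout  = "2s"
  }
}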

The "ACL not found" issue is observed only in the Test and UAT environments, where we have enabled Workload Identities (WI) for the Consul integration.

We verified the token used for the service check from /opt/consul/checks (the Consul data dir) and found that the token used by the check no longer exists in Consul.

The issue is resolved only after restarting the Nomad service.

Reproduction Steps

We tried to reproduce the issue in a local dev environment; however, locally the service registration and deregistration work as expected.
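
For anyone else trying to reproduce, below is a minimal sketch of the kind of job used in the local attempt; the image, ports, and check are assumptions, while the restart policy matches what the log snippet below reports (2 attempts in 30m, mode "fail"):

job "testing" {
  datacenters = ["dc1"]   # assumption

  group "testing" {
    network {
      port "app" {
        to = 8080          # placeholder container port
      }
    }

    # Matches the restart behaviour seen in the logs below.
    restart {
      attempts = 2
      interval = "30m"
      mode     = "fail"
    }

    task "testing" {
      driver = "docker"

      config {
        image = "example/app:latest"   # placeholder image
        ports = ["app"]
      }

      service {
        name     = "testing"
        port     = "app"
        provider = "consul"

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}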

Nomad server and client config snippet

Nomad client:

consul {
  address = "<address>"
  token = "<token>"
}

Nomad Server:

consul {
  address = "<Address>"
  token   = "<token>"

  service_identity {
    aud = ["consul.io"]
    ttl = "1h"
  }

  task_identity {
    aud = ["consul.io"]
    ttl = "1h"
  }
}
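
For completeness, the Consul agents in these environments run with ACLs enabled (required for the WI integration); a sketch of the relevant Consul ACL config follows, with typical values assumed rather than copied from our servers:

acl {
  enabled        = true
  default_policy = "deny"
  down_policy    = "extend-cache"
}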

Nomad log snippet:

":"Task started by client","task":"testing","type":"Started"}
{"@level":"info","@message":"Task event","@module":"client.alloc_runner.task_runner","@timestamp":"2024-07-01T16:01:15.745096+01:00","alloc_id":"b060c3fb-c027-300f-305f-02e29b9cc055","failed":false,"msg
":"Exit Code: 1, Exit Message: \"Docker container exited with non-zero exit code: 1\"","task":"testing","type":"Terminated"}
{"@level":"info","@message":"plugin process exited","@module":"client.driver_mgr.docker.docker_logger","@timestamp":"2024-07-01T16:01:15.748890+01:00","driver":"docker","id":"99053","plugin":"/usr/bin/n
omad"}
{"@level":"info","@message":"not restarting task","@module":"client.alloc_runner.task_runner","@timestamp":"2024-07-01T16:01:15.755206+01:00","alloc_id":"b060c3fb-c027-300f-305f-02e29b9cc055","reason":"
Exceeded allowed attempts 2 in interval 30m0s and mode is \"fail\"","task":"testing"}
{"@level":"info","@message":"Task event","@module":"client.alloc_runner.task_runner","@timestamp":"2024-07-01T16:01:15.755250+01:00","alloc_id":"b060c3fb-c027-300f-305f-02e29b9cc055","failed":true,"msg"
:"Exceeded allowed attempts 2 in interval 30m0s and mode is \"fail\"","task":"testing","type":"Not Restarting"}
{"@level":"info","@message":"plugin process exited","@module":"client.alloc_runner.task_runner.task_hook.logmon","@timestamp":"2024-07-01T16:01:15.765191+01:00","alloc_id":"b060c3fb-c027-300f-305f-02e29
b9cc055","id":"98186","plugin":"/usr/bin/nomad","task":"testing"}
{"@level":"info","@message":"(runner) stopping","@module":"agent","@timestamp":"2024-07-01T16:01:15.766256+01:00"}
{"@level":"info","@message":"(runner) received finish","@module":"agent","@timestamp":"2024-07-01T16:01:15.768239+01:00"}
{"@level":"info","@message":"marking allocation for GC","@module":"client.gc","@timestamp":"2024-07-01T16:01:15.768265+01:00","alloc_id":"b060c3fb-c027-300f-305f-02e29b9cc055"}
{"@level":"info","@message":"Task event","@module":"client.alloc_runner.task_runner","@timestamp":"2024-07-01T16:01:15.770237+01:00","alloc_id":"b060c3fb-c027-300f-305f-02e29b9cc055","failed":false,"msg
":"Unhealthy because of failed task","task":"testing","type":"Alloc Unhealthy"}
{"@level":"warn","@message":"failed to update services in Consul","@module":"consul.sync","@timestamp":"2024-07-01T16:01:15.796522+01:00","error":"Unexpected response code: 403 (ACL not found)"}
{"@level":"warn","@message":"unable to fingerprint consul","@module":"client.fingerprint_mgr.consul","@timestamp":"2024-07-01T16:01:56.816331+01:00","attribute":"consul.partition","cluster":"default"}
{"@level":"error","@message":"still unable to update services in Consul","@module":"consul.sync","@timestamp":"2024-07-01T16:02:10.829798+01:00","error":"Unexpected response code: 403 (ACL not found)","failures":10}
ngcmac commented 3 weeks ago

Hi, I believe this may be the same issue I reported here: https://github.com/hashicorp/nomad/issues/16616#issuecomment-2209381751.

Thanks.

sijo13 commented 3 weeks ago

@ngcmac, yes, it's the same behaviour we see in our Nomad infrastructure too.

The job gets restarted because of a connectivity issue with Vault/Consul, and it then fails to deregister its service in Consul because of the token issue.

However, the issue occurs intermittently, and we couldn't replicate the same behaviour in a local dev mode setup.

tgross commented 3 weeks ago

Thanks @sijo13 and @ngcmac. I'll mark this for further investigation.