hashicorp / nomad

Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.
https://www.nomadproject.io/

Reloading consul certs #17297

Open juananinca opened 1 year ago

juananinca commented 1 year ago

Nomad version

Nomad v1.5.0
BuildDate 2023-03-01T10:11:42Z
Revision fc40c491cacec3d8ec3f2f98cd82b9068a50797c

Operating system and Environment details

NAME="Oracle Linux Server" VERSION="8.7"

Issue

I set up a Nomad cluster with Consul, consisting of a few clients and a single server. Both Nomad and Consul are secured with mutual TLS certificates issued by Vault's PKI secrets engine and rotated with a TTL of 1h by consul-template on each node, as in this tutorial: https://developer.hashicorp.com/nomad/tutorials/integrate-vault/vault-pki-nomad (the Vault service is not running within this cluster). After every rotation, consul-template sends a SIGHUP to both nomad and consul via systemctl reload.
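For reference, the rotation on each node is wired up with consul-template stanzas along these lines (a simplified sketch; the PKI mount, role, and common_name here are illustrative placeholders, not my exact values):

template {
  # Renders a fresh cert from Vault's PKI engine before the 1h TTL runs out
  contents    = "{{ with secret \"pki_int/issue/consul\" \"common_name=client.dc1.consul\" \"ttl=1h\" }}{{ .Data.certificate }}{{ end }}"
  destination = "/opt/consul/ssl/server.pem"
  command     = "systemctl reload consul"
}

# Equivalent stanzas render the private key and the Nomad certs,
# with command = "systemctl reload nomad" for the Nomad ones.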

While testing the cluster I found that one of the clients (let's call it CA) was unable to register any service in Consul, although running the same job on another client (let's call it CB) registered it without any problem. I also noticed in CA's Nomad logs that it was failing to retrieve Consul's checks:

{"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-23T00:52:25.791875+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"}

After restarting nomad on CA, all jobs run on that client register in Consul and the bad certificate error when fetching Consul checks is gone, until the certificates expire again and I am back at the starting point: unable to register new services from CA, with bad certificate errors in the Nomad logs. What is weird is that CB keeps registering services and communicating with Consul even after the certs are rotated, without any restart of the nomad service.

I turned to the documentation (https://developer.hashicorp.com/nomad/docs/configuration#configuration-reload), and it made sense to me (kind of):

tls: note this only reloads the TLS configuration between Nomad agents (servers and clients), and not the TLS configuration for communication with Consul or Vault.

This perfectly explains CA's behaviour regarding Consul communication, but not CB's.

Who's right and who's wrong? Is CA acting as it is supposed to? And what about CB?

Note: I double-checked the expiration of CB's certs by copying them aside at the moment I restarted nomad, while they were still valid (the copies are not rotated), and waiting until they expired. Once expired, I used the copies to curl a Consul endpoint, for instance https://localhost:8500/v1/agent/checks, and got a bad certificate error; yet nomad on CB keeps making requests without any error and without a service restart, only the SIGHUP sent by systemctl reload nomad.
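(For the record, the check with the copied certs was along these lines; the /tmp paths are just where I put the copies:)

# The copied cert is expired by now
openssl x509 -in /tmp/cb-server.pem -noout -enddate

# Consul rejects it, as expected
curl --cacert /opt/consul/ssl/consul-ca.pem \
     --cert /tmp/cb-server.pem --key /tmp/cb-server-key.pem \
     https://localhost:8500/v1/agent/checks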

Reproduction steps

Expected Result

Not sure

Actual Result

Clients behave differently under the same conditions.

Job file (if appropriate)

Nomad Server logs (if appropriate)

Nomad Client logs (if appropriate)

{"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-22T12:37:10.809608+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"} {"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-22T12:37:12.837512+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"} {"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-22T12:37:14.866487+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"} {"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-22T12:37:18.923713+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"} {"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-22T12:37:20.953657+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"} {"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-22T12:37:22.983925+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"} {"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-22T12:37:25.010972+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"}

lgfa29 commented 1 year ago

Hi @juananinca 👋

I believe CA's behaviour is the expected one in this case. This is the code for the Nomad agent configuration reload: https://github.com/hashicorp/nomad/blob/da9ec8ce1eb535bad83e8b9b9ea68e2c1886a8ee/command/agent/agent.go#L1286-L1355

As mentioned in the docs, only Nomad's own TLS configuration is reloaded.
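So to pick up rotated Consul certs, the agent currently needs a full restart rather than a SIGHUP; roughly (assuming systemd units like yours):

# SIGHUP: re-reads the config, but only applies Nomad's own agent-to-agent TLS
systemctl reload nomad

# full restart: rebuilds the Consul (and Vault) API clients with the new certs
systemctl restart nomad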

Are the CA and CB configurations identical, both for Nomad and for Consul? I could imagine this happening if one of the agents is configured to ignore TLS certs.

juananinca commented 10 months ago

Sorry for the delay.

Yes, both configs are the same. Here is the Consul config; the only differences between the clients are node_name and advertise_addr:

  "disable_update_check": false,
  "bootstrap": false,
  "server": false,
  "node_name": "NODE_NAME",
  "datacenter": "DATACENTER_NAME",
  "data_dir": "/opt/consul/data",
  "encrypt": "aaaaaaaaaaaaaaaa==",
  "disable_update_check": true,
  "bind_addr": "0.0.0.0",
  "advertise_addr": "10.10.10.10",
  "addresses": {
    "https": "0.0.0.0",
    "dns": "0.0.0.0"
  },
  "ports": {
    "https": 8500,
    "http": -1
  },
  "key_file": "/opt/consul/ssl/server-key.pem",
  "cert_file": "/opt/consul/ssl/server.pem",
  "ca_file": "/opt/consul/ssl/consul-ca.pem",
  "verify_incoming": true,
  "verify_outgoing": true,
  "retry_join": [
    "11.11.11.11"
  ],
  "log_file": "/var/log/consul/",
  "log_json": true,
  "log_rotate_max_files": 7,
  "limits": {
    "https_handshake_timeout": "10s",
    "http_max_conns_per_client": 1000,
    "rpc_handshake_timeout": "10s",
    "rpc_max_conns_per_client": 1000
  },
  "connect": {
    "enabled": true
  },
  "acl": {
    "enabled": true,
    "default_policy": "deny",
    "enable_token_persistence": true,
    "tokens": {
      "agent": "aaaaaaa-bbbb-cccc-ddddd-eeeeeeeeee"
    }
  }
}
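One comparison worth running on both nodes after a rotation is the on-disk cert versus the cert the local Consul agent actually serves; for instance (paths as in the config above):

# Expiry dates of the cert on disk
openssl x509 -in /opt/consul/ssl/server.pem -noout -dates

# Expiry dates of the cert the running agent presents on port 8500
echo | openssl s_client -connect 127.0.0.1:8500 \
  -cert /opt/consul/ssl/server.pem -key /opt/consul/ssl/server-key.pem \
  -CAfile /opt/consul/ssl/consul-ca.pem 2>/dev/null | openssl x509 -noout -dates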

And the Nomad config; in this case the only differences are the name and the advertise.* IPs:

name = "CLIENT_NAME"
log_level = "WARN"
leave_on_interrupt = true
leave_on_terminate = true
data_dir = "/var/nomad/data"
bind_addr = "0.0.0.0"
disable_update_check = true
limits {
        https_handshake_timeout   = "10s"
        http_max_conns_per_client = 400
        rpc_handshake_timeout     = "10s"
        rpc_max_conns_per_client  = 400
}
advertise {
    http = "10.10.10.10:4646"
    rpc = "10.10.10.10:4647"
    serf = "10.10.10.10:4648"
}
tls {
  http = true
  rpc  = true
  cert_file = "/opt/nomad/ssl/server.pem"
  key_file = "/opt/nomad/ssl/server-key.pem"
  ca_file = "/opt/nomad/ssl/nomad-ca.pem"
  verify_server_hostname = true
  verify_https_client    = true

}
log_file = "/var/log/nomad/"
log_json = true
log_rotate_max_files = 7
consul {
    address = "127.0.0.1:8500"
    server_service_name = "nomad-server"
    client_service_name = "nomad-client"
    auto_advertise = true
    server_auto_join = true
    client_auto_join = true

    ssl = true
    ca_file = "/opt/consul/ssl/consul-ca.pem"
    cert_file = "/opt/consul/ssl/server.pem"
    key_file = "/opt/consul/ssl/server-key.pem"
        token = "aaaaaa-79bbbbb74-cccc-dddddd-eeeeeeee"

}
acl {
  enabled = true
}

vault {
    enabled = true
    address = "https://my.vault.addr:8200/"
    ca_file = "/opt/vault/ssl/vault-ca.pem"
    cert_file = "/opt/vault/ssl/client-vault.pem"
    key_file = "/opt/vault/ssl/client-vault-key.pem"
}
telemetry {
  publish_allocation_metrics = true
  publish_node_metrics       = true
}

As you can see, verify_incoming is set to true in the Consul configuration file, so Consul requires every HTTPS client to present a certificate signed by consul-ca.pem.
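For example (a sketch using the paths from the configs above), the first request below should be rejected and the second accepted while the cert is still valid:

# Rejected: no client certificate presented
curl --cacert /opt/consul/ssl/consul-ca.pem https://127.0.0.1:8500/v1/agent/checks

# Accepted while the cert is valid
curl --cacert /opt/consul/ssl/consul-ca.pem \
     --cert /opt/consul/ssl/server.pem --key /opt/consul/ssl/server-key.pem \
     https://127.0.0.1:8500/v1/agent/checks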