🤔 Another way to approach this problem, as an additional layer of defense, would be to use a firewall rule on the Nomad client instances to prevent access to the metadata service.
Run a Nomad job.
$ nomad run jobs/count-dashboard.hcl
SSH into a Nomad client, and use tcpdump to listen for traffic to the metadata address (169.254.169.254/32).
$ gcloud compute ssh client-0 --tunnel-through-iap
$ sudo tcpdump net 169.254.169.254/32
...
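If the capture comes up empty, tcpdump may be listening on the wrong interface; as a variation (assuming a Linux tcpdump that supports the any pseudo-interface), you can capture across all interfaces instead:
$ sudo tcpdump -i any net 169.254.169.254/32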
In another terminal, exec into a dashboard allocation from the countdash job we started:
$ export NOMAD_TOKEN="..."
$ export NOMAD_ALLOC="$(nomad status countdash | grep "dashboard" | tail -n 1 | awk '{print $1}')"
$ nomad alloc exec $NOMAD_ALLOC /bin/sh
$ curl -H "Metadata-Flavor: Google" http://metadata/computeMetadata/v1/instance/attributes/startup-script
...
Then, back on the Nomad client terminal, you can see the traffic in the tcpdump output.
To block that metadata traffic, you can use the following iptables rule (remember to save/persist this rule on startup):
$ iptables --insert FORWARD 1 --in-interface nomad --destination 169.254.169.254/32 --jump DROP
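One way to persist the rule across reboots (a sketch, assuming a Debian/Ubuntu-based client image; adjust for your distribution) is the iptables-persistent package:
$ sudo apt-get install --yes iptables-persistent
$ sudo netfilter-persistent save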
Back in the Nomad alloc exec terminal, the traffic will no longer be allowed.
$ curl -H "Metadata-Flavor: Google" http://metadata/computeMetadata/v1/instance/attributes/startup-script
...
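To double-check that the DROP rule is what's blocking the requests, you can inspect its packet counters on the client; they increment for each blocked request:
$ sudo iptables --list FORWARD --verbose --numeric | grep 169.254.169.254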
As of v2.0.0 / #18, this module deploys a Consul cluster in tandem with the Nomad cluster. Moreover, it uses the metadata service to perform the majority of the dynamic server configuration, which exposes many secrets to malicious/compromised workloads on Nomad client instances. All secrets should be removed from the metadata service, or at least not stored in plaintext.