DejfCold opened this issue 3 years ago
Thank you for the report @DejfCold
I was able to reproduce the issue as you described, and it seems to only happen when using Consul Connect (commenting out the `connect` block from your service made `/etc/hosts` look right), so I've updated the title to highlight this.
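For illustration, the diagnostic change was roughly this (a sketch based on the freeipa service block shown further down in this thread, not the reporter's exact job file):

```hcl
service {
  name = "freeipa"
  port = "443"

  # With the connect block commented out, the task's extra_hosts entries
  # appear in /etc/hosts as expected.
  # connect {
  #   sidecar_service {}
  # }
}
```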
I also encountered the problem in our cluster (using Nomad 1.1.3 & Consul 1.10.1). As a workaround, for now I created a dnsmasq service with the `extra_hosts` parameter:
job "dnsmasq" {
datacenters = ["dc1"]
type = "service"
group "dns" {
count = 1
network {
mode = "bridge"
port "dns" {
static = 53
to = 53
}
}
service {
name = "dnsmasq"
port = "53"
}
task "dnsmasq" {
driver = "docker"
config {
entrypoint = ["dnsmasq", "-d"]
image = "strm/dnsmasq:latest"
volumes = [
"local/dnsmasq.conf:/config/dnsmasq.conf"
]
extra_hosts = [
"my.extra.domain:127.0.0.1",
"my2.extra.domain:127.0.0.1"
]
}
template {
data = <<EOF
#log all dns queries
log-queries
#dont use hosts nameservers
no-resolv
EOF
destination = "local/dnsmasq.conf"
}
}
}
}
and in the job that needs the `extra_hosts` parameter, I removed `extra_hosts` and added the DNS configuration in the `network` stanza instead:
```hcl
network {
  mode = "bridge"

  dns {
    servers = ["<HOST_IP>:53"]
  }
}
```
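For context on how the extra names actually get served: Docker's `extra_hosts` writes them into the dnsmasq container's own `/etc/hosts` (that group has no Connect sidecar, so `extra_hosts` still works there), and dnsmasq answers queries from `/etc/hosts` by default. The mappings could presumably also live in the dnsmasq config itself, roughly like this (a sketch reusing the placeholder domains from the job above):

```hcl
# Sketch only: same template stanza as in the dnsmasq job above, with the
# host entries served straight from dnsmasq config instead of the
# container's /etc/hosts.
template {
  data = <<EOF
log-queries
no-resolv
address=/my.extra.domain/127.0.0.1
address=/my2.extra.domain/127.0.0.1
EOF
  destination = "local/dnsmasq.conf"
}
```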
@ollivainola are you using Consul Connect?
@lgfa29 yes. The task where I used `extra_hosts` uses Consul Connect. I was previously using Nomad 1.1.2 and Consul 1.10.0, where `extra_hosts` was working just fine with Consul Connect. For me, `extra_hosts` stopped working after I upgraded the cluster to the newer versions.
Thanks for the extra info @ollivainola.
I've confirmed that https://github.com/hashicorp/nomad/pull/10823 broke `extra_hosts` with Consul Connect, because the `/etc/hosts` file is now being shared with all tasks in the alloc. Since the Connect sidecar doesn't have any `extra_hosts`, it will generate an `/etc/hosts` without them.
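Concretely, when a service only declares `sidecar_service {}`, Nomad injects a default sidecar task roughly along these lines (a simplified sketch, not the exact generated spec), and that task's Docker config carries no `extra_hosts`:

```hcl
sidecar_task {
  driver = "docker"

  config {
    # default Envoy sidecar image; args, ports, etc. omitted in this sketch
    image = "${meta.connect.sidecar_image}"
    # no extra_hosts here, so the shared /etc/hosts is generated without
    # the custom entries from the main task
  }
}
```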
~@DejfCold I thought I had a quick fix for this, but this problem is actually more tricky than it looks.~
~As a workaround, you could leverage the fact that `/etc/hosts` is now shared between tasks in the same alloc and manually add entries in a `prestart` task. So something like this:~
job "countdash" {
# ...
group "dashboard" {
# ...
task "extra-hosts" {
driver = "docker"
config {
image = "busybox:1.33"
command = "/bin/sh"
args = ["local/extra_hosts.sh"]
}
template {
data = <<EOF
cat <<EOT >> /etc/hosts
127.0.0.1 freeipa.ingress.dc1.consul
EOT
EOF
destination = "local/extra_hosts.sh"
}
lifecycle {
hook = "prestart"
}
}
}
}
~It's not great, but hopefully it will work for now.~
~It's also worth pointing out that this workaround suffers from the same issue as my naive fix: a possible race condition between tasks trying to update `/etc/hosts` in parallel. Maybe a `sleep` in the script could help, or a loop that blocks until the Connect sidecar proxy is running.~
EDIT:
Scratch all of that 😬
A better workaround would be to set the `extra_hosts` on the Connect sidecar task instead of your main task, so something like this from your example:
job "freeipa" {
datacenters = ["dc1"]
group "freeipa" {
network {
mode = "bridge"
}
service {
name = "freeipa"
port = "443"
connect {
sidecar_service {}
+ sidecar_task {
+ config {
+ extra_hosts = ["freeipa.ingress.dc1.consul:127.0.0.1"]
+ }
+ }
}
}
task "freeipa" {
resources {
memory = 2000
}
driver = "docker"
config {
image = "freeipa/freeipa-server:centos-8"
args = ["ipa-server-install", "-U", "-r", "DC1.CONSUL", "--no-ntp"]
sysctl = {
"net.ipv6.conf.all.disable_ipv6" = "0"
}
- extra_hosts = ["freeipa.ingress.dc1.consul:127.0.0.1"]
}
env {
HOSTNAME = "freeipa.ingress.dc1.consul"
PASSWORD = "testtest"
}
}
}
}
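After redeploying, something like `nomad alloc exec -task freeipa <alloc-id> cat /etc/hosts` should show the extra entry in the shared hosts file (alloc ID and task name will vary with your job, of course).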
Thanks for the workaround! I closed the issue, but now, thinking about it, I'm not sure if I should have left that to you?
Hum...good question. Even though there's a reasonable workaround I think we still need to provide a proper fix for this, so I will keep it open for now 👍
When setting `host.docker.internal:host-gateway` in the `sidecar_task` config I get this:

```
failed to build mount for /etc/hosts: invalid IP address "host.docker.internal:host-gateway"
```
Thank you very much. I had a problem adding an extra host to `/etc/hosts` in a Docker container, and with your comment it's all set. I need these extra hosts for my TLS setup to point at a specific private domain.
While this is working roughly as intended, I'm going to re-title this and label it as an enhancement. There's probably some discussion to be had about whether the `extra_hosts` should always be duplicated or not, but I'll leave that up to whoever picks this up for implementation to figure out. :grinning:
Nomad version

Output from `nomad version`:

```
Nomad v1.1.3 (8c0c8140997329136971e66e4c2337dfcf932692)
```

Operating system and Environment details

Rocky Linux 8.4 (Green Obsidian), Docker version 20.10.8, build 3967b7d

Issue

`task.extra_hosts` is not propagated into the Docker container's `/etc/hosts`.

Reproduction steps

`cat /etc/hosts` using either `nomad alloc exec` or `docker exec` on the allocation/container.

Expected Result

Actual Result

Job file (if appropriate)

Nomad Server logs (if appropriate)

Nomad Client logs (if appropriate)

See also https://github.com/hashicorp/nomad/issues/7746#issuecomment-898945862