Closed — arunkchow closed this issue 2 years ago
I suspect this comes down to how the Nomad podman task driver works. Like Docker, podman creates a separate network stack for a task group, and Nomad has to decide whether to register the podman stack's details or the host's details when it registers the service with Consul. If you look in Consul, you will likely see that the service registration details are for the internal (podman stack) network.
If you update the service definition in your job file to include `address_mode = "host"`, it might fix things for you :) Note that this might then break health checks, as these can have their own `address_mode` and don't inherit the service setting, but since you want the host IP and port, hopefully it will be fine.
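A rough sketch of what that could look like (service and check details here are illustrative, not taken from the original job file):

```hcl
service {
  name = "tomcat"
  port = "http"

  # Register the host's IP and the mapped dynamic port in Consul,
  # rather than the podman container's internal address.
  address_mode = "host"

  check {
    type     = "http"
    path     = "/"
    interval = "10s"
    timeout  = "2s"

    # Checks don't inherit the service's address_mode, so set it
    # here too if the check should hit the host address.
    address_mode = "host"
  }
}
```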
I moved the service health check stanza to the group stanza and it started working as expected. Thank you. :)
I have a Nomad cluster with 3 servers and 3 clients, plus a Consul cluster with 3 servers and a Consul agent running everywhere. Fabio runs as a systemd service. All nodes run RHEL 8. The regular health checks and services related to Nomad, Fabio, and Consul report fine in Consul's UI. The problem I am running into is when I execute a Nomad job with any container that has an exposed port, in this case a Tomcat container with port 8080 exposed inside the container. Nomad job syntax below:
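(The original job file isn't reproduced in this thread; a minimal sketch of a comparable job, with the image, names, and urlprefix tag purely illustrative, might look like this:)

```hcl
job "tomcat" {
  datacenters = ["dc1"]

  group "web" {
    network {
      # Nomad allocates a dynamic host port and maps it to 8080 inside the container.
      port "http" {
        to = 8080
      }
    }

    task "tomcat" {
      driver = "podman"

      config {
        image = "docker.io/library/tomcat:9"
        ports = ["http"]
      }

      service {
        name = "tomcat"
        port = "http"

        # Fabio builds its routing table from urlprefix- tags.
        tags = ["urlprefix-/tomcat"]

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```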
On the Nomad UI, the job shows the actual Nomad clients' host IPs and Nomad's dynamic ports. However, in Fabio, I see that routes are created pointing to the containers' internal IPs and port numbers.
We have an external load balancer with the Nomad clients in its pool (because they run the Fabio service), used for accessing the apps and services running on the Nomad clients via urlprefix. Every time I access a Nomad client's externally advertised address, Fabio forwards the request to a container's internal IP and port.
I can access the app or service fine if my request goes to Host A and Fabio's routing table happens to point to the IP of the container running on Host A at that time, as part of Fabio's round-robin scheme (pure luck). However, if I access the same app/service again, Fabio routes the request from Host A to the internal IP of the container on either Host B or Host C as part of its round-robin strategy. Since Host A is the active member of the external load balancer's pool serving the requests, and since Host A can't reach the IP of a container running on Host B or Host C, I get a 'Page isn't working' error. If I keep hitting refresh, I get the page again on the third attempt, when round-robin points back to the container running on Host A. Is there any way to make Fabio use only the actual host IPs and ports instead of container IPs?
Everything works fine if I run raw_exec- or java-driver-based apps using ${NOMAD_PORT_http}. My guess is that in those cases there aren't two networks involved, so it goes with the actual host's IPs. In the container case, however, there are two networks involved, and it picks the container network to create routes instead of the host network.
Please help.