oxycash opened this issue 1 year ago (Status: Open)
As per https://developer.hashicorp.com/consul/docs/architecture#lan-gossip-pool, if UDP is not available the agent will fall back to TCP. Does this cause the Consul client status to frequently swing between alive and failed? Because that is what is happening for us.
I'm not running on k8s, just inside a Docker container, but I have the same issue.
@soupdiver If using host networking is fine for your requirements, it will work; otherwise it's going to be a problem. You can also try advertising the node IP instead of the pod IP, which means only one Consul container per node (a rough sketch of that approach is below).
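A minimal sketch of the advertise-the-node-IP approach, assuming the official hashicorp/consul image and the Kubernetes downward API; the image tag and the retry-join address are placeholders, not taken from this issue:

```yaml
# Illustrative pod spec fragment only: expose the node IP to the agent and
# advertise it instead of the pod IP. The gossip port also needs hostPort
# mappings so the advertised address is actually reachable.
spec:
  # hostNetwork: true               # alternative: the host-networking workaround
  containers:
    - name: consul-client
      image: hashicorp/consul:1.15
      env:
        - name: NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP   # the node's IP via the downward API
      args:
        - agent
        - -advertise=$(NODE_IP)          # advertise the node IP, not the pod IP
        - -retry-join=<server-ip>
      ports:
        - containerPort: 8301            # Serf LAN, TCP
          hostPort: 8301
          protocol: TCP
        - containerPort: 8301            # Serf LAN, UDP
          hostPort: 8301
          protocol: UDP
```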
Yeah, using host networking works, but what is the underlying issue? Even if I expose the Serf LAN port over both TCP and UDP, the error still shows up.
From my deep dive, Docker has limitations in how it handles UDP port publishing.
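For reference, a minimal sketch of the kind of TCP/UDP port publishing described above, assuming the official hashicorp/consul image on Docker's default bridge network (the image tag and join address are placeholders):

```sh
# Publish the Serf LAN port (8301) over both TCP and UDP on the default
# bridge network. Even with both mappings, gossip over published UDP can
# still misbehave behind Docker's NAT, as reported above.
docker run -d --name consul-client \
  -p 8301:8301/tcp \
  -p 8301:8301/udp \
  hashicorp/consul:1.15 agent -retry-join=<server-ip>
```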
Same issue here, with 3 native servers and no Docker or VM in between. I've tested the connections with nc and everything looks good, but the problem with Consul persists.
I have the same problem with Consul servers running on k8s and Consul clients outside of k8s in Docker. The problem was related to the Docker limitation with UDP. The only workaround I found was running the Docker clients with the hostNetwork: true option, i.e. host networking (see the sketch below).
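A minimal sketch of that host-networking workaround for a standalone Docker client, assuming the official hashicorp/consul image; the bind and join addresses are placeholders:

```sh
# Run the Consul client directly on the host network so Serf's UDP gossip
# bypasses Docker's published-port NAT entirely.
docker run -d --name consul-client \
  --network host \
  hashicorp/consul:1.15 agent \
  -bind=<host-ip> \
  -retry-join=<server-ip>
```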
Overview of the Issue
Unable to connect agents running on K8s to external Consul servers which are running directly on VMs. We are not using the official Helm charts as of now.
Reproduction Steps
Install Consul in server mode on VMs (3 nodes).
Client config:
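For illustration, a sketch of a typical client config for joining external servers; all addresses and values are placeholders, not taken from this issue:

```hcl
# Illustrative Consul client configuration (placeholders only).
datacenter     = "dc1"
data_dir       = "/consul/data"
server         = false
bind_addr      = "0.0.0.0"
advertise_addr = "<pod-or-node-ip>"   # the address other agents should use to reach this client
retry_join     = ["<server-1-ip>", "<server-2-ip>", "<server-3-ip>"]
ports {
  serf_lan = 8301                     # gossip uses both TCP and UDP on this port
}
```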
Client Docker Image:
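A minimal sketch, assuming the image simply layers the client config onto the official hashicorp/consul base (tag and filename are placeholders):

```dockerfile
# Illustrative only: bake the client config into the official image.
# The official image loads configuration from /consul/config.
FROM hashicorp/consul:1.15
COPY client.hcl /consul/config/client.hcl
```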
Client K8 Deployment:
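A sketch of the kind of Deployment involved, assuming the client runs on the pod network (which is where the UDP/gossip issues discussed above appear); names, image tag, and replica count are illustrative:

```yaml
# Illustrative only: a single Consul client on the pod network.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul-client
  template:
    metadata:
      labels:
        app: consul-client
    spec:
      containers:
        - name: consul
          image: consul-client:custom       # hypothetical tag for the custom image above
          args: ["agent", "-config-dir=/consul/config"]
          ports:
            - containerPort: 8301           # Serf LAN, TCP
              protocol: TCP
            - containerPort: 8301           # Serf LAN, UDP
              protocol: UDP
```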
Logs
Expected behavior
The Consul client should join the Consul servers without errors.
Environment details