Closed: moris1amar closed this issue 4 years ago
Did you figure this one out @moris1amar ?
I am using the consul-helm chart to deploy ONLY the client to a k8s cluster. The servers are running on standalone VMs; I'm just trying to add one consul client per k8s node. I cannot get bi-directional traffic flowing: the new consul clients register and join the cluster, but the cluster cannot talk back to them.
My existing Consul cluster is 3 VMs at 10.0.0.4, 10.0.0.5 and 10.0.0.6. My Kubernetes nodes are 10.1.48.4, 10.1.48.5 and 10.1.48.6. The Kubernetes pod addresses are in 10.254.2.*.
The new k8s consul client pod can connect 10.254.2.* --> 10.0.0.4 (existing server VM) and register. Shortly afterward the error logs are flooded with "Refuting a suspect message" and the new client starts flapping between offline and online. The consul servers are trying to contact the new consul clients and cannot make a connection.
The Consul server cluster (10.0.0.4-6) cannot talk to the Kubernetes pod address range 10.254.2.*; there would need to be an ingress rule for that to happen, since pod IPs in my situation are not directly addressable on the network.
The Helm chart always sets -advertise to the pod address, which in my case is not routable from outside the cluster. The new consul clients are advertising a pod IP in 10.254.2.*, not a 'real' IP address. I don't see any support for changing the -advertise parameter, nor any sign of the chart being able to create ingress rules for me.
Am I barking up the wrong tree with the consul-helm chart in trying to have agents talk to external consul servers?
It would appear that adding a hostPort and changing -advertise from the pod address to the host address would fix the problem, and I might try it in my own chart.
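For anyone attempting the same patch, a rough sketch of the daemonset container change (the fieldRef is the standard Kubernetes downward API; the exact command and flags in the chart's template may differ, and the -retry-join addresses here are just my server VMs):

```yaml
# Expose the node's IP to the consul container via the downward API,
# then advertise that instead of the pod IP.
env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
command:
  - "/bin/sh"
  - "-ec"
  - |
    exec consul agent \
      -advertise="${HOST_IP}" \
      -data-dir=/consul/data \
      -retry-join=10.0.0.4 \
      -retry-join=10.0.0.5 \
      -retry-join=10.0.0.6
```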
Hi Everyone, Using external servers is not well supported in the helm chart right now. You have to make some manual patches to the chart and I haven't had a chance to test this out so there may be other issues:
You'll need to ensure that the Consul clients in the daemonset register with the node IP rather than the pod IP, and each Kube node will need to be routable from your Consul servers. You have to edit the chart yourself right now because this is not configurable.
You'll also need to ensure the client's port 8301 is a hostPort, because that's what the servers will communicate with them on (see https://www.consul.io/docs/internals/architecture.html#10-000-foot-view).
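A sketch of what that could look like in the daemonset's container spec (8301 is Serf LAN gossip, which uses both TCP and UDP; port names are illustrative):

```yaml
# Bind the Serf LAN gossip port on the node itself so the
# external servers can reach the client at the node IP.
ports:
  - name: serflan-tcp
    containerPort: 8301
    hostPort: 8301
    protocol: TCP
  - name: serflan-udp
    containerPort: 8301
    hostPort: 8301
    protocol: UDP
```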
I have it working (for me at least): -advertise set to HOST_IP, and 8301 and 8302 set as hostPorts. Nodes are joining and staying connected. I can run the tests and do a PR if you would like.
Nice! Yeah that would be great.
I'm going to track this in https://github.com/hashicorp/consul-helm/issues/253 so please use that issue, thanks!
Hi, I want to deploy a consul agent at the node level on a Kubernetes (EKS) cluster, so that all pods on a node use this agent and join an external consul server running on AWS, including DNS and service registration. So I have overridden the values.yaml file to enable only the client, like this:
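(A minimal client-only override of this shape, with illustrative values rather than the exact file from this report, would be something like:)

```yaml
# values.yaml: disable everything by default, then run only the
# client daemonset and point it at external servers.
global:
  enabled: false
client:
  enabled: true
  join:
    # Placeholder addresses; use your own external server IPs/hostnames.
    - "10.0.0.4"
    - "10.0.0.5"
    - "10.0.0.6"
```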
and this is the result:
I note that the pods are not recognized by my external consul server.
What could be the reason?
Thanks, Maurice