On further checking, I found that it's exposing port 8500:
ports:
  - containerPort: 8500
    hostPort: 8500
    name: http
  - containerPort: 8301
    name: serflan
  - containerPort: 8302
    name: serfwan
  - containerPort: 8300
    name: server
  - containerPort: 8600
    name: dns-tcp
    protocol: "TCP"
  - containerPort: 8600
    name: dns-udp
    protocol: "UDP"
While trying to join using port 8500 of the minikube machine, I am still facing the same issue.
docker@consul1:~$ docker run -d --rm --net=host consul agent --retry-join=192.168.99.100:8500 -bind=192.168.99.101
3089289082ff67004bf42779bef3bb9f24932513006eb5cd1887fbbba8f4eb11
docker@consul1:~$ docker logs 3089289082ff
==> Starting Consul agent...
==> Consul agent running!
Version: 'v1.2.3'
Node ID: '466beff1-bff8-db5c-2ec6-e98a6a740d5e'
Node name: 'consul1'
Datacenter: 'dc1' (Segment: '')
Server: false (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
Cluster Addr: 192.168.99.101 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
2018/10/06 08:25:08 [INFO] serf: EventMemberJoin: consul1 192.168.99.101
2018/10/06 08:25:08 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2018/10/06 08:25:08 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2018/10/06 08:25:08 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
2018/10/06 08:25:08 [INFO] agent: started state syncer
2018/10/06 08:25:08 [INFO] agent: Retry join LAN is supported for: aliyun aws azure digitalocean gce k8s os packet scaleway softlayer triton vsphere
2018/10/06 08:25:08 [INFO] agent: Joining LAN cluster...
2018/10/06 08:25:08 [INFO] agent: (LAN) joining: [192.168.99.100:8500]
2018/10/06 08:25:08 [WARN] manager: No servers available
2018/10/06 08:25:08 [ERR] agent: failed to sync remote state: No known Consul servers
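A quick way to check what is actually reachable on the minikube machine (assuming a netcat that supports -z; per the spec above only 8500 has a hostPort):

docker@consul1:~$ nc -vz 192.168.99.100 8500   # open: the HTTP API, mapped via hostPort
docker@consul1:~$ nc -vz 192.168.99.100 8301   # refused: the serf LAN port is not mapped to the host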
But I can see the list of consul servers from the K8s consul client:
kubectl exec -it consul-df8js consul members
Node             Address           Status  Type    Build  Protocol  DC   Segment
consul-server-0  172.17.0.13:8301  alive   server  1.2.2  2         dc1  <all>
consul-server-1  172.17.0.12:8301  alive   server  1.2.2  2         dc1  <all>
consul-server-2  172.17.0.11:8301  alive   server  1.2.2  2         dc1  <all>
consul-df8js     172.17.0.10:8301  alive   client  1.2.2  2         dc1  <default>
NodePort services expose ports starting at 30000, as far as I recall.
@mmisztal1980 Yes, in this case the consul client deployed using helm uses hostPort by default, but using the minikube machine IP and hostPort 8500 I can't join the external consul client (running outside of K8s) with the consul running on K8s.
Currently, the client agents -advertise the POD_IP, which means that you'll have to route to the pod IP to have them join any cluster. It's possible we can add a configuration to advertise the HOST_IP instead, in which case we'd have to also expose all ports via hostPort (or hostNetwork). I expect in real environments both scenarios exist: where pod IPs are routable, and where only host IPs are routable.
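As a purely hypothetical sketch of the HOST_IP idea (nothing below exists in the chart today; the DaemonSet name and server address are assumptions):

# Inject the node IP into the client container via the downward API...
kubectl patch daemonset consul --type=json -p='[
  {"op":"add","path":"/spec/template/spec/containers/0/env/-",
   "value":{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}}}]'
# ...and have the agent advertise it instead of the pod IP:
consul agent -advertise="${HOST_IP}" -bind=0.0.0.0 -retry-join="consul-server-0.consul-server.default.svc"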
Are you trying to have a Consul cluster created with nodes outside of the K8S cluster? Just trying to understand what you're trying to do.
@mitchellh Sorry for the late response.
The consul cluster is running on top of kubernetes in a minikube machine hosted on a Mac. I started another docker-machine and confirmed that from the docker-machine I am able to reach the minikube setup. I then started a consul client agent container on the docker-machine, and I want this agent to join the consul cluster running on K8s.
After deploying the consul cluster using helm, I created one service for the consul client:
kubectl describe svc consulclientsvc
Name: consulclientsvc
Namespace: default
Labels: run=consul-client-svc
Annotations: <none>
Selector: app=consul,component=client
Type: NodePort
IP: 10.106.109.176
Port: <unset> 8500/TCP
TargetPort: 8500/TCP
NodePort: <unset> 30938/TCP
Endpoints: 172.17.0.11:8500
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Confirmed from the docker-machine that I can hit the service:
docker@consulclient1:~$ curl 192.168.99.100:30938/v1/agent/members
[{"Name":"minikube","Addr":"172.17.0.11","Port":8301,"Tags":{"build":"1.3.0:e8757838","dc":"minidc","id":"ff1b876c-bcc5-08a7-1c8e-be73444b7fab","role":"node","segment":"","vsn":"2","vsn_max":"3","vsn_min":"2"},"Status":1,"ProtocolMin":1,"ProtocolMax":5,"ProtocolCur":2,"DelegateMin":2,"DelegateMax":5,"DelegateCur":4},{"Name":"consulconnect1-server-0","Addr":"172.17.0.12","Port":8301,"Tags":{"bootstrap":"1","build":"1.3.0:e8757838","dc":"minidc","id":"6e3c66e2-071a-24b1-2ed2-a89bc7063693","port":"8300","raft_vsn":"3","role":"consul","segment":"","vsn":"2","vsn_max":"3","vsn_min":"2","wan_join_port":"8302"},"Status":1,"ProtocolMin":1,"ProtocolMax":5,"ProtocolCur":2,"DelegateMin":2,"DelegateMax":5,"DelegateCur":4}]
But still, when trying to run the consul agent on the docker-machine, I am getting the following error; not sure what's going wrong.
docker@consulclient1:~$ docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' -v /home/docker/conf.json:/consul/config/config.json --name=consulagent1 consul agent --retry-join=192.168.99.100:30938 -bind=192.168.99.102
ac4557e427333fa38fd437e1079cb091206b171db47f4351068779d314d06334
docker@consulclient1:~$ docker logs ac4557e427333fa38fd437e1079cb091206b171db47f4351068779d314d06334
==> Starting Consul agent...
==> Consul agent running!
Version: 'v1.4.0'
Node ID: 'bdbf06aa-8721-19ad-d576-032056f13d8e'
Node name: 'consulclient1'
Datacenter: 'minidc' (Segment: '')
Server: false (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: 8600)
Cluster Addr: 192.168.99.102 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
2018/11/25 08:10:44 [INFO] serf: EventMemberJoin: consulclient1 192.168.99.102
2018/11/25 08:10:44 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2018/11/25 08:10:44 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2018/11/25 08:10:44 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
2018/11/25 08:10:44 [INFO] agent: started state syncer
2018/11/25 08:10:44 [INFO] agent: Retry join LAN is supported for: aliyun aws azure digitalocean gce k8s os packet scaleway softlayer triton vsphere
2018/11/25 08:10:44 [INFO] agent: Joining LAN cluster...
2018/11/25 08:10:44 [INFO] agent: (LAN) joining: [192.168.99.100:30938]
2018/11/25 08:10:44 [WARN] manager: No servers available
2018/11/25 08:10:44 [ERR] agent: failed to sync remote state: No known Consul servers
2018/11/25 08:10:44 [INFO] agent: (LAN) joined: 0 Err: 1 error(s) occurred:
* Failed to join 192.168.99.100: received invalid msgType (72), expected pushPullMsg (6) from=192.168.99.100:30938
2018/11/25 08:10:44 [WARN] agent: Join LAN failed: <nil>, retrying in 30s
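For what it's worth, msgType 72 is ASCII 'H': the agent is getting an HTTP response back, because NodePort 30938 forwards to the HTTP API port 8500, while a serf LAN join needs port 8301 (TCP and UDP). A sketch of the distinction (the pod IP is taken from the members output above and would only work if it were routable from the docker-machine):

docker@consulclient1:~$ curl 192.168.99.100:30938/v1/status/leader            # the HTTP API answers here
docker@consulclient1:~$ docker exec consulagent1 consul join 172.17.0.12:8301 # a join must target the serf port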
Currently you need your Pod IPs to be routable from wherever else you're running your Consul agents.
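For this minikube topology that could look like the sketch below (the pod CIDR and IPs are taken from the output above, and it assumes the minikube VM forwards traffic onto its docker bridge):

docker@consulclient1:~$ sudo ip route add 172.17.0.0/16 via 192.168.99.100
docker@consulclient1:~$ docker run -d --net=host consul agent -retry-join=172.17.0.12 -bind=192.168.99.102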
External servers are now supported thanks to https://github.com/hashicorp/consul-helm/pull/289
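A values sketch along those lines, assuming the chart's server.enabled and client.join keys (check values.yaml for the exact names in your chart version):

helm install ./consul-helm \
  --set server.enabled=false \
  --set client.enabled=true \
  --set 'client.join={192.168.99.102}'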
It's mentioned in the guide that the consul client will expose port 8500 on the host machine, but after deploying consul using the helm chart the client is not exposing any port on the host machine.
I would suggest running this in host mode so that the client ports are available for clients outside K8s to connect with the consul running on K8s.
I tried to create a service using NodePort so that I could connect my external consul client through the consul client which is running as a DaemonSet in K8s, but no luck.
But when the external consul client tries to register with the consul running in K8s, it fails with the following error.