hashicorp / consul-helm

Helm chart to install Consul and other associated components.

Consul client is not exposing port 8500 as mentioned in the guide #23

Closed: ervikrant06 closed this issue 4 years ago

ervikrant06 commented 6 years ago

The guide mentions that the Consul client will expose port 8500 on the host machine, but after deploying Consul using the Helm chart, the client is not exposing any port on the host machine.

kubectl describe pod consul-df8js
Name:           consul-df8js
Namespace:      default
Node:           minikube/10.0.2.15
Start Time:     Fri, 05 Oct 2018 19:56:18 +0530
Labels:         app=consul
                chart=consul-0.1.0
                component=client
                controller-revision-hash=53542314
                hasDNS=true
                pod-template-generation=1
                release=consul
Annotations:    consul.hashicorp.com/connect-inject=false
Status:         Running
IP:             172.17.0.10
Controlled By:  DaemonSet/consul
Containers:
  consul:
    Container ID:  docker://6688a53c6d651d3226bfde66a2cdf77ea193721987b2b81ca6c46c8ac0e26bf3
    Image:         consul:1.2.2
    Image ID:      docker-pullable://consul@sha256:8603f0d1b2278364ecb7c11068a477b1ea648df735eda8791362063aba99656a
    Ports:         8500/TCP, 8301/TCP, 8302/TCP, 8300/TCP, 8600/TCP, 8600/UDP
    Command:
      /bin/sh
      -ec
      CONSUL_FULLNAME="consul"

exec /bin/consul agent \
  -advertise="${POD_IP}" \
  -bind=0.0.0.0 \
  -client=0.0.0.0 \
  -config-dir=/consul/config \
  -datacenter=dc1 \
  -data-dir=/consul/data \
  -retry-join=${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
  -retry-join=${CONSUL_FULLNAME}-server-1.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
  -retry-join=${CONSUL_FULLNAME}-server-2.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc \
  -domain=consul

    State:          Running
      Started:      Fri, 05 Oct 2018 19:56:54 +0530
    Ready:          True
    Restart Count:  0
    Readiness:      exec [/bin/sh -ec curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | \
grep -E '".+"'
] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_IP:      (v1:status.podIP)
      NAMESPACE:  default (v1:metadata.namespace)
    Mounts:
      /consul/config from config (rw)
      /consul/data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-px6mr (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  data:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      consul-client-config
    Optional:  false
  default-token-px6mr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-px6mr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:          <none>
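
As a sanity check, note that the readiness probe above already curls the agent's HTTP API from inside the pod; running the same check against the node shows whether a hostPort is reachable from outside. A minimal check, assuming 192.168.99.100 is the minikube IP as used later in this thread:

# A quoted leader address in the response means the agent HTTP API
# is reachable on the node at port 8500.
curl http://192.168.99.100:8500/v1/status/leader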

I would suggest running this in host network mode so that the client ports are available to clients outside Kubernetes that need to connect to the Consul cluster running on Kubernetes.
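
Host networking is a standard Kubernetes pod-spec setting; a minimal sketch of what the client DaemonSet template would need, assuming it is patched by hand (nothing in this thread confirms the chart exposes such an option):

spec:
  template:
    spec:
      hostNetwork: true                    # bind the agent directly to the node's network
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution working with hostNetwork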

I tried creating a NodePort service so that I could use that port to connect my external Consul client through the Consul client running as a DaemonSet in Kubernetes, but had no luck.

apiVersion: v1
kind: Service
metadata:
  name: consulclientsvc
  labels:
    run: consulclientsvc
spec:
  type: NodePort
  ports:
  - port: 8500
    targetPort: 8500
    protocol: TCP
    name: consulport
  selector:
    app: consul
    component: client      

kubectl get svc consulclientsvc
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
consulclientsvc   NodePort   10.100.249.49   <none>        8500:31664/TCP   11s

kubectl get ep consulclientsvc
NAME              ENDPOINTS          AGE
consulclientsvc   172.17.0.10:8500   20s

But when the external Consul client tries to register with the Consul cluster running in Kubernetes, it fails with the following error.

docker@consul1:~$ docker run -d --rm --net=host consul agent --retry-join=192.168.99.100:31664 -bind=192.168.99.101
80d24b20ea64d3443c9e89912d2cf2b98787bbb1a1d44b0c8aa93896af7aecc2
docker@consul1:~$ docker logs 80d24b20ea64d3443c9e89912d2cf2b98787bbb1a1d44b0c8aa93896af7aecc2
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.2.3'
           Node ID: '8a7a2c00-1147-dab0-ad5b-306b4273e869'
         Node name: 'consul1'
        Datacenter: 'dc1' (Segment: '')
            Server: false (Bootstrap: false)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 192.168.99.101 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2018/10/06 08:14:13 [INFO] serf: EventMemberJoin: consul1 192.168.99.101
    2018/10/06 08:14:13 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2018/10/06 08:14:13 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2018/10/06 08:14:13 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
    2018/10/06 08:14:13 [INFO] agent: started state syncer
    2018/10/06 08:14:13 [INFO] agent: Retry join LAN is supported for: aliyun aws azure digitalocean gce k8s os packet scaleway softlayer triton vsphere
    2018/10/06 08:14:13 [INFO] agent: Joining LAN cluster...
    2018/10/06 08:14:13 [INFO] agent: (LAN) joining: [192.168.99.100:31664]
    2018/10/06 08:14:13 [WARN] manager: No servers available
    2018/10/06 08:14:13 [ERR] agent: failed to sync remote state: No known Consul servers
ervikrant06 commented 6 years ago

On further checking, I found that it is in fact exposing port 8500 via a hostPort:

          ports:
            - containerPort: 8500
              hostPort: 8500
              name: http
            - containerPort: 8301
              name: serflan
            - containerPort: 8302
              name: serfwan
            - containerPort: 8300
              name: server
            - containerPort: 8600
              name: dns-tcp
              protocol: "TCP"
            - containerPort: 8600
              name: dns-udp
              protocol: "UDP"
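
Worth noting in the snippet above: only 8500 carries a hostPort, so only the HTTP API is reachable via the node IP; the gossip port 8301 is not. A hypothetical patch to also expose the LAN gossip port on the host (a sketch only, not something the chart is confirmed to support):

            - containerPort: 8301
              hostPort: 8301
              name: serflan
              # Serf LAN gossip also uses UDP on 8301; a matching
              # UDP entry would be needed as well.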

But when trying to join using port 8500 of the minikube machine, I still face the same issue.

docker@consul1:~$ docker run -d --rm --net=host consul agent --retry-join=192.168.99.100:8500 -bind=192.168.99.101
3089289082ff67004bf42779bef3bb9f24932513006eb5cd1887fbbba8f4eb11

docker@consul1:~$ docker logs 3089289082ff
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.2.3'
           Node ID: '466beff1-bff8-db5c-2ec6-e98a6a740d5e'
         Node name: 'consul1'
        Datacenter: 'dc1' (Segment: '')
            Server: false (Bootstrap: false)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 192.168.99.101 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2018/10/06 08:25:08 [INFO] serf: EventMemberJoin: consul1 192.168.99.101
    2018/10/06 08:25:08 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2018/10/06 08:25:08 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2018/10/06 08:25:08 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
    2018/10/06 08:25:08 [INFO] agent: started state syncer
    2018/10/06 08:25:08 [INFO] agent: Retry join LAN is supported for: aliyun aws azure digitalocean gce k8s os packet scaleway softlayer triton vsphere
    2018/10/06 08:25:08 [INFO] agent: Joining LAN cluster...
    2018/10/06 08:25:08 [INFO] agent: (LAN) joining: [192.168.99.100:8500]
    2018/10/06 08:25:08 [WARN] manager: No servers available
    2018/10/06 08:25:08 [ERR] agent: failed to sync remote state: No known Consul servers

But I can see the list of Consul servers from the Kubernetes Consul client.

kubectl exec -it consul-df8js consul members
Node             Address           Status  Type    Build  Protocol  DC   Segment
consul-server-0  172.17.0.13:8301  alive   server  1.2.2  2         dc1  <all>
consul-server-1  172.17.0.12:8301  alive   server  1.2.2  2         dc1  <all>
consul-server-2  172.17.0.11:8301  alive   server  1.2.2  2         dc1  <all>
consul-df8js     172.17.0.10:8301  alive   client  1.2.2  2         dc1  <default>
mmisztal1980 commented 6 years ago

NodePort services expose ports starting at 30000, as far as I recall.
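
For context, the default NodePort range is 30000-32767, configurable via the kube-apiserver flag --service-node-port-range. A specific node port can also be pinned with the nodePort field; a sketch extending the service above, with 30500 as an arbitrary example value:

  ports:
  - port: 8500
    targetPort: 8500
    nodePort: 30500   # must fall within the configured NodePort range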

ervikrant06 commented 6 years ago

@mmisztal1980 Yes; in this case the Consul client deployed via Helm uses a hostPort by default, but even using the minikube machine's address and hostPort 8500, I can't join the external Consul client (running outside of Kubernetes) to the Consul cluster running on Kubernetes.

mitchellh commented 6 years ago

Currently, the client agents -advertise the POD_IP, which means that you'll have to route to the pod IP to have them join any cluster. It's possible we can add a configuration option to advertise the HOST_IP instead, in which case we'd also have to expose all ports via hostPort (or hostNetwork). I expect both scenarios exist in real environments: ones where pod IPs are routable, and ones where only host IPs are routable.

Are you trying to have a Consul cluster created with nodes outside of the K8S cluster? Just trying to understand what you're trying to do.
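
A sketch of the advertise-HOST_IP idea: the node IP is available through the Kubernetes downward API, so the client template could inject it alongside POD_IP and pass it to -advertise. This is a hypothetical snippet, not an option confirmed by this thread:

env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP   # the node's IP, vs. status.podIP used today

# ...and in the agent command:
exec /bin/consul agent \
  -advertise="${HOST_IP}" \
  ...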

ervikrant06 commented 5 years ago

@mitchellh Sorry for the late response.

The Consul cluster is running on a Kubernetes minikube machine hosted on a Mac. I started another docker-machine and confirmed that the minikube setup is reachable from it. I then started a Consul client agent container on the docker-machine, and I want this agent to join the Consul cluster running on Kubernetes.

ervikrant06 commented 5 years ago

After deploying the Consul cluster using Helm, I created a service for the Consul client.

kubectl describe svc consulclientsvc
Name:                     consulclientsvc
Namespace:                default
Labels:                   run=consul-client-svc
Annotations:              <none>
Selector:                 app=consul,component=client
Type:                     NodePort
IP:                       10.106.109.176
Port:                     <unset>  8500/TCP
TargetPort:               8500/TCP
NodePort:                 <unset>  30938/TCP
Endpoints:                172.17.0.11:8500
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

I confirmed from the docker-machine that I can hit the service.

docker@consulclient1:~$ curl 192.168.99.100:30938/v1/agent/members
[{"Name":"minikube","Addr":"172.17.0.11","Port":8301,"Tags":{"build":"1.3.0:e8757838","dc":"minidc","id":"ff1b876c-bcc5-08a7-1c8e-be73444b7fab","role":"node","segment":"","vsn":"2","vsn_max":"3","vsn_min":"2"},"Status":1,"ProtocolMin":1,"ProtocolMax":5,"ProtocolCur":2,"DelegateMin":2,"DelegateMax":5,"DelegateCur":4},{"Name":"consulconnect1-server-0","Addr":"172.17.0.12","Port":8301,"Tags":{"bootstrap":"1","build":"1.3.0:e8757838","dc":"minidc","id":"6e3c66e2-071a-24b1-2ed2-a89bc7063693","port":"8300","raft_vsn":"3","role":"consul","segment":"","vsn":"2","vsn_max":"3","vsn_min":"2","wan_join_port":"8302"},"Status":1,"ProtocolMin":1,"ProtocolMax":5,"ProtocolCur":2,"DelegateMin":2,"DelegateMax":5,"DelegateCur":4}]

But when trying to start the Consul agent on the docker-machine, I still get the following error; I'm not sure what's going wrong.

docker@consulclient1:~$ docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' -v /home/docker/conf.json:/consul/config/config.json --name=consulagent1 consul agent --retry-join=192.168.99.100:30938 -bind=192.168.99.102

ac4557e427333fa38fd437e1079cb091206b171db47f4351068779d314d06334
docker@consulclient1:~$ docker logs ac4557e427333fa38fd437e1079cb091206b171db47f4351068779d314d06334
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.4.0'
           Node ID: 'bdbf06aa-8721-19ad-d576-032056f13d8e'
         Node name: 'consulclient1'
        Datacenter: 'minidc' (Segment: '')
            Server: false (Bootstrap: false)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: 8600)
      Cluster Addr: 192.168.99.102 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2018/11/25 08:10:44 [INFO] serf: EventMemberJoin: consulclient1 192.168.99.102
    2018/11/25 08:10:44 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2018/11/25 08:10:44 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2018/11/25 08:10:44 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
    2018/11/25 08:10:44 [INFO] agent: started state syncer
    2018/11/25 08:10:44 [INFO] agent: Retry join LAN is supported for: aliyun aws azure digitalocean gce k8s os packet scaleway softlayer triton vsphere
    2018/11/25 08:10:44 [INFO] agent: Joining LAN cluster...
    2018/11/25 08:10:44 [INFO] agent: (LAN) joining: [192.168.99.100:30938]
    2018/11/25 08:10:44 [WARN] manager: No servers available
    2018/11/25 08:10:44 [ERR] agent: failed to sync remote state: No known Consul servers
    2018/11/25 08:10:44 [INFO] agent: (LAN) joined: 0 Err: 1 error(s) occurred:

* Failed to join 192.168.99.100: received invalid msgType (72), expected pushPullMsg (6) from=192.168.99.100:30938
    2018/11/25 08:10:44 [WARN] agent: Join LAN failed: <nil>, retrying in 30s
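
A note on the error above: msgType 72 is ASCII 'H', the first byte of an HTTP response, which is consistent with the gossip handshake being answered by the agent's HTTP API. -retry-join speaks the Serf/memberlist gossip protocol that agents serve on 8301 (TCP and UDP), not HTTP on 8500, so a NodePort forwarding to port 8500 cannot accept a join. A sketch of what a join would need to target instead, assuming the client's gossip port were somehow reachable from the docker-machine (it is not exposed by default here):

# hypothetical: join against the Serf LAN port, not the HTTP API port
docker run -d --net=host consul agent \
  --retry-join=192.168.99.100:8301 \
  -bind=192.168.99.102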
lkysow commented 5 years ago

Currently you need your Pod IPs to be routable from wherever else you're running your Consul agents.
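
For illustration, making pod IPs routable typically means adding a route for the pod CIDR via a cluster node. A sketch assuming a hypothetical pod CIDR of 10.244.0.0/16 and node IP 192.168.99.100; note that in this minikube setup the pods actually sit on 172.17.0.0/16, which collides with the docker-machine's own Docker bridge, so a static route would not help here:

# hypothetical: route the pod network via the Kubernetes node
sudo ip route add 10.244.0.0/16 via 192.168.99.100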

lkysow commented 4 years ago

External servers are now supported thanks to https://github.com/hashicorp/consul-helm/pull/289.
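
A sketch of the resulting usage, following the pattern the chart documents for running clients inside Kubernetes against Consul servers outside the cluster; treat the exact keys as illustrative and check the chart's values.yaml:

# values.yaml (illustrative)
global:
  enabled: false
client:
  enabled: true
  join:
    - "192.168.99.101"   # address of an external Consul server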