hashicorp / consul-helm

Helm chart to install Consul and other associated components.
Mozilla Public License 2.0

Service mesh set-up using Helm #523

Closed · viswanath7 closed this issue 4 years ago

viswanath7 commented 4 years ago

Hello,

I'm following the documentation on setting up a service mesh using Helm and am trying to consume an exposed service (greeter) from another one (consumer).

I'm using the following Kubernetes specifications for my experiment:

  1. app-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: develop

  2. greeter.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: greeter
  namespace: develop
# A service account provides an identity for processes that run in a Pod.
# Processes in containers inside pods contact the api-server, using a 'service account' for authentication.
# See https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
---
apiVersion: v1
kind: Pod
metadata:
  name: greeter
  namespace: develop
  labels:
    app: greeter
  annotations:
    # A Connect sidecar running an Envoy proxy is automatically injected into the pod. It can both accept and establish connections.
    # The sidecar listens on a dynamically allocated port registered with Consul and proxies valid inbound connections to the service port (greeter-port, i.e. 9090) in the pod.
    # To establish a connection to the pod using Connect, a client must use another Connect proxy.
    # The client's Connect proxy uses Consul service discovery to find all available upstream proxies and their public ports.
    "consul.hashicorp.com/connect-inject": "true"
    # Name of the service being served. This pod accepts inbound connections.
    # If using ACLs, this must be the same name as the Pod's ServiceAccount.
    "consul.hashicorp.com/connect-service": "greeter"
    # For pods that accept inbound connections, this specifies the port to which inbound connections are routed,
    # i.e. the service port that the proxy's public listener (itself on a dynamic port) forwards traffic to.
    "consul.hashicorp.com/connect-service-port": "greeter-port"
spec:
  containers:
    # The container name matches the service name registered in Consul. The registered name can be customised with the
    # `consul.hashicorp.com/connect-service` annotation. If using ACLs, it must be the same as the Pod's ServiceAccount name.
    - name: greeter
      image: bloque/greeter
      imagePullPolicy: IfNotPresent
      ports:
        - name: greeter-port
          containerPort: 9090
      resources:
        limits:
          memory: "200Mi"
        requests:
          memory: "100Mi"
      readinessProbe:
        httpGet:
          port: greeter-port
          path: /health-check
        initialDelaySeconds: 5
        timeoutSeconds: 30
      livenessProbe:
        httpGet:
          port: greeter-port
          path: /health-check
        periodSeconds: 60
        initialDelaySeconds: 5
        timeoutSeconds: 30
        successThreshold: 1
        failureThreshold: 3
  serviceAccountName: greeter

  3. http-consumer.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: consumer
  namespace: develop
---
apiVersion: v1
kind: Pod
metadata:
  name: consumer
  namespace: develop
  labels:
    app: consumer
  annotations:
    "consul.hashicorp.com/connect-inject": "true"
    # List of upstream services that this pod needs to connect to via Connect, along with a static local port on which to listen for those connections.
    # When a connection is established to that local port, the proxy establishes a connection to the target service ('greeter')
    # using mutual TLS and identifying itself as the source service ('consumer').
    "consul.hashicorp.com/connect-service-upstreams": "greeter:9001"
spec:
  containers:
    - name: consumer
      image: bloque/http-client:1
      imagePullPolicy: IfNotPresent
      env:
        - name: SERVICE_BASE
          # As the connect proxy will be injected into this pod, one can refer to it with localhost
          value: "http://localhost:9001"
        - name: SERVICE_PATH
          value: "/joke"
  serviceAccountName: consumer
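
Once these manifests are applied, the upstream wiring can also be smoke-tested directly from inside the pod. This is only a sketch: it assumes curl (or a similar HTTP client) is available in the consumer image.

kubectl exec consumer -c consumer -- curl -s http://localhost:9001/joke

If the sidecar's listener on localhost:9001 is up and traffic is allowed, this returns the greeter response.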

When I place these files in a directory of my choice and apply them with kubectl apply -f ./<directory-containing-files>/, I notice the following:

❯ kubectl get pods
NAME                                                          READY   STATUS             RESTARTS   AGE
consul-connect-injector-webhook-deployment-696d6559f6-zd27b   1/1     Running            0          47h
consul-m5cpn                                                  1/1     Running            0          47h
consul-server-0                                               1/1     Running            0          47h
consumer                                                      2/3     CrashLoopBackOff   8          17m
greeter                                                       3/3     Running            0          17m

In short, proxies are created for both pods. greeter accepts incoming requests, but when consumer tries to reach greeter through its local upstream proxy, I see the following error in the logs:

❯ kubectl logs -f consumer -c consumer
[2020-07-01 19:51:44,172] [INFO] [com.example.application.Main] [ioapp-compute-1] [] - Fetching response from URL http://localhost:9001/joke...
java.net.SocketException: Unexpected end of file from server
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:500)
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:481)
    at java.base/sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1977)
    at java.base/sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1972)
    at java.base/java.security.AccessController.doPrivileged(AccessController.java:554)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1971)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1539)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1523)
    at java.base/java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:527)
    at scalaj.http.HttpRequest.doConnection(Http.scala:367)
    at scalaj.http.HttpRequest.exec(Http.scala:343)
    at scalaj.http.HttpRequest.asString(Http.scala:492)
    at com.example.application.Main$.$anonfun$run$3(Main.scala:32)
    at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:87)
    at cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:359)
    at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:380)
    at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:323)
    at cats.effect.internals.IOShift$Tick.run(IOShift.scala:35)
    at cats.effect.internals.PoolUtils$$anon$2$$anon$3.run(PoolUtils.scala:52)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
    at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: java.net.SocketException: Unexpected end of file from server
    at java.base/sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:866)
    at java.base/sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:689)
    at java.base/sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:863)
    at java.base/sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:689)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1618)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1523)
    at scalaj.http.HttpRequest.doConnection(Http.scala:365)
    ... 12 more

I'm struggling to spot the flaw in my configuration to have a working set-up. Can you please help?

ishustava commented 4 years ago

Hey @viswanath7

Your configuration looks correct to me.

I think the underlying issue you're seeing here is a race condition. It looks like the consumer app tries to talk to the greeter app at startup, but since the app and the Envoy sidecar start at the same time, it's likely that Envoy hasn't started the upstream listener on localhost yet.

It would be interesting to compare the timestamp of where you're seeing this failure with the logs in the Envoy container. You could retrieve Envoy logs by running kubectl logs -f consumer -c consul-connect-envoy-sidecar.
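
If it is the race, the listener for the greeter upstream simply won't exist yet when the app makes its first call. One way to check (a sketch only; it assumes curl is available in one of the pod's containers) is to query the Envoy admin endpoint, which Connect sidecars bind to 127.0.0.1:19000 by default, and look for a greeter listener on 127.0.0.1:9001:

kubectl exec consumer -c consumer -- curl -s http://127.0.0.1:19000/listeners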

viswanath7 commented 4 years ago

Hi @ishustava,

Thank you for your prompt reply. Just wondering, wouldn't the restarts help in this scenario? What is the ideal way to handle this situation? Would you suggest starting greeter first and then starting the consumer?

Please find below the relevant logs that you requested

[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:251] initializing epoch 0 (hot restart version=11.104)
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:253] statically linked extensions:
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.stats_sinks: envoy.dog_statsd, envoy.metrics_service, envoy.stat_sinks.hystrix, envoy.statsd
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.previous_hosts
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.opencensus, envoy.tracers.xray, envoy.zipkin
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.filters.http: envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_stats, envoy.filters.http.header_to_metadata, envoy.filters.http.jwt_authn, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.rbac, envoy.filters.http.tap, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.gzip, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.resolvers: envoy.ip
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.dubbo_proxy.filters: envoy.filters.dubbo.router
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.health_checkers: envoy.health_checkers.redis
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.filters.listener: envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.dubbo_proxy.protocols: dubbo
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.udp_listeners: raw_udp_listener
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.dubbo_proxy.serializers: dubbo.hessian2
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.dubbo_proxy.route_matchers: default
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.filters.udp_listener: envoy.filters.udp_listener.udp_proxy
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.thrift_proxy.transports: auto, framed, header, unframed
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.retry_priorities: envoy.retry_priorities.previous_priorities
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.dubbo_proxy, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mysql_proxy, envoy.filters.network.rbac, envoy.filters.network.sni_cluster, envoy.filters.network.thrift_proxy, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.access_loggers: envoy.file_access_log, envoy.http_grpc_access_log, envoy.tcp_grpc_access_log
[2020-07-01 22:50:10.366][1][info][main] [source/server/server.cc:255]   envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis
[2020-07-01 22:50:10.377][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cluster.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:10.377][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.Cluster.tls_context' from file cluster.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:10.381][1][info][main] [source/server/server.cc:336] admin address: 127.0.0.1:19000
[2020-07-01 22:50:10.382][1][info][main] [source/server/server.cc:455] runtime: layers:
  - name: static_layer
    static_layer:
      envoy.deprecated_features:envoy.api.v2.Cluster.tls_context: true
      envoy.deprecated_features:envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager.Tracing.operation_name: true
      envoy.deprecated_features:envoy.config.trace.v2.ZipkinConfig.HTTP_JSON_V1: true
[2020-07-01 22:50:10.382][1][info][config] [source/server/configuration_impl.cc:62] loading 0 static secret(s)
[2020-07-01 22:50:10.382][1][info][config] [source/server/configuration_impl.cc:68] loading 1 cluster(s)
[2020-07-01 22:50:10.406][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:167] cm init: initializing cds
[2020-07-01 22:50:10.411][1][info][config] [source/server/configuration_impl.cc:72] loading 0 listener(s)
[2020-07-01 22:50:10.411][1][info][config] [source/server/configuration_impl.cc:97] loading tracing configuration
[2020-07-01 22:50:10.411][1][info][config] [source/server/configuration_impl.cc:116] loading stats sink configuration
[2020-07-01 22:50:10.412][1][info][main] [source/server/server.cc:550] starting main dispatch loop
[2020-07-01 22:50:10.563][1][info][upstream] [source/common/upstream/cds_api_impl.cc:74] cds: add 2 cluster(s), remove 1 cluster(s)
[2020-07-01 22:50:10.573][1][info][upstream] [source/common/upstream/cds_api_impl.cc:90] cds: add/update cluster 'local_app'
[2020-07-01 22:50:10.573][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.Cluster.tls_context' from file cluster.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:10.588][1][info][upstream] [source/common/upstream/cds_api_impl.cc:90] cds: add/update cluster 'greeter.default.dc1.internal.f66b7a8f-0cd4-67b8-505b-6f1beb2de8a3.consul'
[2020-07-01 22:50:10.588][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:145] cm init: initializing secondary clusters
[2020-07-01 22:50:10.662][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:171] cm init: all clusters initialized
[2020-07-01 22:50:10.663][1][info][main] [source/server/server.cc:529] all clusters initialized. initializing init manager
[2020-07-01 22:50:10.665][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.listener.FilterChain.tls_context' from file listener_components.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:10.665][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.listener.Filter.config' from file listener_components.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:10.665][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.listener.Filter.config' from file listener_components.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:10.666][1][info][upstream] [source/server/lds_api.cc:73] lds: add/update listener 'public_listener:10.1.0.226:20000'
[2020-07-01 22:50:10.666][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.listener.Filter.config' from file listener_components.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:10.667][1][info][upstream] [source/server/lds_api.cc:73] lds: add/update listener 'greeter:127.0.0.1:9001'
[2020-07-01 22:50:10.667][1][info][config] [source/server/listener_manager_impl.cc:707] all dependencies initialized. starting workers
[2020-07-01 22:50:10.969][1][info][upstream] [source/common/upstream/cds_api_impl.cc:74] cds: add 2 cluster(s), remove 1 cluster(s)
[2020-07-01 22:50:10.969][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.Cluster.tls_context' from file cluster.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:10.971][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.listener.FilterChain.tls_context' from file listener_components.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:10.971][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.listener.Filter.config' from file listener_components.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:10.971][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.listener.Filter.config' from file listener_components.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:10.971][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.listener.Filter.config' from file listener_components.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:56.736][1][info][upstream] [source/common/upstream/cds_api_impl.cc:74] cds: add 2 cluster(s), remove 1 cluster(s)
[2020-07-01 22:50:56.736][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.Cluster.tls_context' from file cluster.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:56.738][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.listener.FilterChain.tls_context' from file listener_components.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:56.738][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.listener.Filter.config' from file listener_components.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:56.738][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.listener.Filter.config' from file listener_components.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-01 22:50:56.738][1][warning][misc] [source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.api.v2.listener.Filter.config' from file listener_components.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.

Also, just in case it's useful, the Helm configuration used is shown below.

consul-connect-configuration.yml

global:
  name: consul
  tls:
    enabled: true
    # This configuration sets `verify_outgoing`, `verify_server_hostname`,
    # and `verify_incoming` to `false` on servers and clients,
    # which allows TLS-disabled nodes to join the cluster.
    verify: false
  acls:
    # The setting 'manageSystemACLs' will trigger a Kubernetes job called 'server-acl-init' that will
    # create the bootstrap token, along with other ACLs. Upon completion of the job, one should be able to
    # retrieve the bootstrap token from the Kubernetes secret 'consul-bootstrap-acl-token'.
    manageSystemACLs: true
# Enable Connect for secure communication between nodes
connectInject:
  # All the services using Connect will automatically be registered in the Consul catalog
  enabled: true
  k8sAllowNamespaces: ['default', 'develop']
  # The deny list takes precedence over the allow list.
  k8sDenyNamespaces: []
client:
  enabled: true
  updateStrategy: |
    type: RollingUpdate
# Use only one Consul server for local development
server:
  replicas: 1
  bootstrapExpect: 1
  disruptionBudget:
    enabled: true
    maxUnavailable: 0
  # Setting this to a non-zero value indicates that an upgrade of the Consul cluster is in progress
  updatePartition: 0
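
For reference, a values file like this is typically applied along the following lines. This is a sketch only; it assumes the chart is installed from the official HashiCorp Helm repository rather than from a local checkout of this repo.

helm repo add hashicorp https://helm.releases.hashicorp.com
helm install consul hashicorp/consul -f consul-connect-configuration.yml
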
viswanath7 commented 4 years ago

Update:

Starting greeter and consumer at different times doesn't resolve the problem either. As shown below, I started greeter first and, after a significant delay, started consumer.

❯ kubectl apply -f ./services/greeter.yaml
serviceaccount/greeter unchanged
pod/greeter created
❯ kubectl apply -f ./services/http-consumer.yaml
serviceaccount/consumer unchanged
pod/consumer created
❯ kubectl get pods
NAME                                                          READY   STATUS     RESTARTS   AGE
consul-connect-injector-webhook-deployment-696d6559f6-zd27b   1/1     Running    0          2d2h
consul-m5cpn                                                  1/1     Running    0          2d2h
consul-server-0                                               1/1     Running    0          2d2h
consumer                                                      0/3     Init:0/1   0          11s
greeter                                                       3/3     Running    0          69s
❯ kubectl get pods
NAME                                                          READY   STATUS             RESTARTS   AGE
consul-connect-injector-webhook-deployment-696d6559f6-zd27b   1/1     Running            0          2d2h
consul-m5cpn                                                  1/1     Running            0          2d2h
consul-server-0                                               1/1     Running            0          2d2h
consumer                                                      2/3     CrashLoopBackOff   1          29s
greeter                                                       3/3     Running            0          87s
ishustava commented 4 years ago

Hey @viswanath7

Thanks so much for providing this info. Looking at your Helm config file helped me figure out what the problem is!

When you have ACLs enabled, Consul denies all service-to-service traffic by default until you create intentions that allow it.

To create an intention, follow these steps:

First, set environment variables so you can talk to Consul via the CLI:

export CONSUL_HTTP_ADDR=https://localhost:8501
export CONSUL_HTTP_SSL_VERIFY=false
export CONSUL_HTTP_TOKEN=$(kubectl get secret consul-bootstrap-acl-token -o jsonpath={.data.token}| base64 -D)

Then port-forward the Consul server pod:

kubectl port-forward consul-server-0 8501

Finally, create the intention that allows traffic from consumer to greeter:

consul intention create consumer greeter

Your consumer pod should recover on its own.
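
To confirm the intention took effect, the same CLI session (with the port-forward still running) can be used; this is just a verification sketch:

consul intention check consumer greeter

It should report Allowed once the intention exists.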

viswanath7 commented 4 years ago

Thank you very much indeed. Creating an intention to allow traffic solved my problem.