hashicorp / consul-k8s

First-class support for Consul Service Mesh on Kubernetes
https://www.consul.io/docs/k8s
Mozilla Public License 2.0

[v0.19.0] Empty response from server over Consul Connect #252

Closed: ghost closed this issue 4 years ago

ghost commented 4 years ago

I've recently rolled out a new GKE cluster and installed the latest v0.19.0 Consul Helm chart with Helm 3.

Alongside this, I've followed the steps documented here to install Elastic Cloud on Kubernetes (ECK).

I've combined the advice found in this section of the docs, noting the need to turn off TLS on the HTTP layer, with your docs here, adding the single required annotation. The final result looks like so:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: strix
spec:
  version: 7.6.2
  updateStrategy:
    changeBudget:
      maxSurge: 2
      maxUnavailable: 1
  http:
    tls:
      selfSignedCertificate:
        disabled: true
      service:
        spec:
          type: NodePort
  nodeSets:
  - name: elasticsearch
    count: 1
    config:
      node.master: true
      node.data: true
      xpack.security.enabled: false
      node.ingest: true
    volumeClaimTemplates:
      - metadata:
          name: strix-es-data
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 50Gi
          storageClassName: standard
    env:
      - name: ES_JAVA_OPTS
        value: "-Xms4g -Xmx4g"
    resources:
      requests:
        memory: 4Gi
        cpu: 0.5
      limits:
        memory: 4Gi
        cpu: 2
    podTemplate:
      metadata:
        annotations:
          consul.hashicorp.com/connect-inject: "true"
      spec:
        initContainers:
          - name: sysctl
            securityContext:
              privileged: true
            command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']

You'll note the addition of the "connect-inject" annotation, which should be all that's required, as well as TLS being turned off.

Applying the manifest results in a healthy cluster, and port-forwarding the service (NodePort) to my machine and curling the endpoint confirms the cluster is healthy.
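
(For reference, the health check I'm running is roughly the following; strix-es-http is the name I'm assuming ECK gives the HTTP service for the strix cluster:)

kubectl port-forward service/strix-es-http 9200:9200
curl http://localhost:9200/_cluster/health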

I then continued to follow the advice in the Connect docs to be able to connect from another service, applying this manifest:

apiVersion: v1
kind: Pod
metadata:
  name: strix-test
  annotations:
    consul.hashicorp.com/connect-inject: "true"
    consul.hashicorp.com/connect-service-upstreams: "consul-elasticsearch:1234,static-server:1235"
spec:
  containers:
    # This name will be the service name in Consul.
    - name: static-client
      image: tutum/curl:latest
      # Just spin & wait forever, we'll use `kubectl exec` to demo
      command: [ "/bin/sh", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
---
apiVersion: v1
kind: Pod
metadata:
  name: static-server
  annotations:
    consul.hashicorp.com/connect-inject: "true"
spec:
  containers:
    # This name will be the service name in Consul.
    - name: static-server
      image: hashicorp/http-echo:latest
      args:
        - -text="hello world"
        - -listen=:8080
      ports:
        - containerPort: 8080
          name: http

Testing the example service works without issue, proving, at least in a limited sense, that Connect has been installed successfully.

However, attempting to connect to Elasticsearch through the forwarded Connect upstream returns either curl error 52 (Empty reply from server) or error 56 (Connection reset by peer).
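
Concretely, from inside the client pod the static-server upstream responds while the Elasticsearch upstream does not (ports match the upstream annotation above):

kubectl exec strix-test -c static-client -- curl -s http://localhost:1235
# returns "hello world" as expected
kubectl exec strix-test -c static-client -- curl -v http://localhost:1234
# fails with curl error 52 (empty reply) or 56 (connection reset)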

Consul values.yaml:

# Available parameters and their default values for the Consul chart.

# global holds values that affect multiple components of the chart.
global:
  # enabled is the master enabled/disabled setting.
  # If true, servers, clients, Consul DNS and the Consul UI will be enabled.
  # Each component can override this default via its component-specific
  # "enabled" config.
  # If false, no components will be installed by default and per-component
  # opt-in is required, such as by setting `server.enabled` to true.
  enabled: true

  # name sets the prefix used for all resources in the helm chart.
  # If not set, the prefix will be "<helm release name>-consul".
  name: null

  # domain is the domain Consul will answer DNS queries for
  # (see https://www.consul.io/docs/agent/options.html#_domain) and the domain
  # services synced from Consul into Kubernetes will have,
  # e.g. `service-name.service.consul`.
  domain: consul

  # image is the name (and tag) of the Consul Docker image for clients and
  # servers. This can be overridden per component.
  # This should be pinned to a specific version tag, otherwise you may
  # inadvertently upgrade your Consul version.
  #
  # Examples:
  #   # Consul 1.5.0
  #   image: "consul:1.5.0"
  #   # Consul Enterprise 1.5.0
  #   image: "hashicorp/consul-enterprise:1.5.0-ent"
  image: "consul:1.7.2"

  # array of objects containing image pull secret names that will be applied to
  # each service account.
  # This can be used to reference image pull secrets if using
  # a custom consul or consul-k8s Docker image.
  # See https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry.
  #
  # Example:
  #   imagePullSecrets:
  #   - name: pull-secret-name
  #   - name: pull-secret-name-2
  imagePullSecrets: []

  # imageK8S is the name (and tag) of the consul-k8s Docker image that
  # is used for functionality such as catalog sync. This can be overridden
  # per component.
  # Note: support for the catalog sync's liveness and readiness probes was added
  # to consul-k8s 0.6.0. If using an older consul-k8s version, you may need to
  # remove these checks to make the sync work.
  # If using acls.manageSystemACLs then must be >= 0.10.1.
  # If using connect inject then must be >= 0.10.1.
  # If using Consul Enterprise namespaces, must be >= 0.12.
  imageK8S: "hashicorp/consul-k8s:0.13.0"

  # datacenter is the name of the datacenter that the agents should register
  # as. This can't be changed once the Consul cluster is up and running
  # since Consul doesn't support an automatic way to change this value
  # currently: https://github.com/hashicorp/consul/issues/1858.
  datacenter: phoenix

  # enablePodSecurityPolicies controls whether pod
  # security policies are created for the Consul components created by this
  # chart. See https://kubernetes.io/docs/concepts/policy/pod-security-policy/.
  enablePodSecurityPolicies: false

  # gossipEncryption configures which Kubernetes secret to retrieve Consul's
  # gossip encryption key from (see https://www.consul.io/docs/agent/options.html#_encrypt).
  # If secretName or secretKey are not set, gossip encryption will not be enabled.
  # The secret must be in the same namespace that Consul is installed into.
  #
  # The secret can be created by running:
  #    kubectl create secret generic consul-gossip-encryption-key \
  #      --from-literal=key=$(consul keygen).
  #
  # In this case, secretName would be "consul-gossip-encryption-key" and
  # secretKey would be "key".
  gossipEncryption:
    # secretName is the name of the Kubernetes secret that holds the gossip
    # encryption key. The secret must be in the same namespace that Consul is installed into.
    secretName: ""
    # secretKey is the key within the Kubernetes secret that holds the gossip
    # encryption key.
    secretKey: ""

  # Enables TLS encryption across the cluster to verify authenticity of the
  # servers and clients that connect. Note: It is HIGHLY recommended that you also
  # enable Gossip encryption.
  # See https://learn.hashicorp.com/consul/security-networking/agent-encryption
  #
  # Note: this relies on functionality introduced with Consul 1.4.1. Make sure
  # your global.image value is at least version 1.4.1.
  tls:
    enabled: true

    # enableAutoEncrypt turns on the auto-encrypt feature on
    # clients and servers.
    # It also switches consul-k8s components to retrieve the CA
    # from the servers via the API.
    # Requires Consul 1.7.1+ and consul-k8s 0.13.0
    enableAutoEncrypt: true

    # serverAdditionalDNSSANs is a list of additional DNS names to
    # set as Subject Alternative Names (SANs) in the server certificate.
    # This is useful when you need to access the Consul server(s) externally,
    # for example, if you're using the UI.
    serverAdditionalDNSSANs: []

    # serverAdditionalIPSANs is a list of additional IP addresses to
    # set as Subject Alternative Names (SANs) in the server certificate.
    # This is useful when you need to access Consul server(s) externally,
    # for example, if you're using the UI.
    serverAdditionalIPSANs: []

    # If verify is true, 'verify_outgoing', 'verify_server_hostname', and
    # 'verify_incoming_rpc' will be set to true for Consul servers and clients.
    # Set this to false to incrementally roll out TLS on an existing Consul cluster.
    # Note: remember to switch it back to true once the rollout is complete.
    # Please see this guide for more details:
    # https://learn.hashicorp.com/consul/security-networking/certificates
    verify: false

    # If httpsOnly is true, Consul will disable the HTTP port on both
    # clients and servers and only accept HTTPS connections.
    httpsOnly: true

    # caCert is a Kubernetes secret containing the certificate
    # of the CA to use for TLS communication within the Consul cluster.
    # If you have generated the CA yourself with the consul CLI,
    # you could use the following command to create the secret in Kubernetes:
    #
    #   kubectl create secret generic consul-ca-cert \
    #           --from-file='tls.crt=./consul-agent-ca.pem'
    caCert:
      secretName: null
      secretKey: null

    # caKey is a Kubernetes secret containing the private key
    # of the CA to use for TLS communications within the Consul cluster.
    # If you have generated the CA yourself with the consul CLI,
    # you could use the following command to create the secret in Kubernetes:
    #
    #   kubectl create secret generic consul-ca-key \
    #           --from-file='tls.key=./consul-agent-ca-key.pem'
    #
    # Note that we need the CA key so that we can generate server and client certificates.
    # It is particularly important for the client certificates since they need to have host IPs
    # as Subject Alternative Names. In the future, we may support bringing your own server
    # certificates.
    caKey:
      secretName: null
      secretKey: null

  # [Enterprise Only] enableConsulNamespaces indicates that you are running
  # Consul Enterprise v1.7+ with a valid Consul Enterprise license and would like to
  # make use of configuration beyond registering everything into the `default` Consul
  # namespace. Requires consul-k8s v0.12+.
  # Additional configuration options are found in the `consulNamespaces` section
  # of both the catalog sync and connect injector.
  enableConsulNamespaces: false

  # [DEPRECATED] Use acls.manageSystemACLs instead.
  bootstrapACLs: false

  # Configure ACLs.
  acls:

    # If true, the Helm chart will automatically manage ACL tokens and policies
    # for all Consul and consul-k8s components. This requires servers to be running inside Kubernetes.
    # Additionally, requires Consul >= 1.4 and consul-k8s >= 0.10.1.
    manageSystemACLs: false

    # If true, an ACL token will be created that can be used in secondary
    # datacenters for replication. This should only be set to true in the
    # primary datacenter since the replication token must be created from that
    # datacenter.
    # In secondary datacenters, the secret needs to be imported from the primary
    # datacenter and referenced via global.acls.replicationToken.
    createReplicationToken: false

    # replicationToken references a secret containing the replication ACL token.
    # This token will be used by secondary datacenters to perform ACL replication
    # and create ACL tokens and policies.
    replicationToken:
      secretName: null
      secretKey: null

# Server, when enabled, configures a server cluster to run. This should
# be disabled if you plan on connecting to a Consul cluster external to
# the Kube cluster.
server:
  enabled: "-"
  image: null
  replicas: 3
  bootstrapExpect: 3 # Should be <= replicas count

  # enterpriseLicense refers to a Kubernetes secret that you have created that
  # contains your enterprise license. It is required if you are using an
  # enterprise binary. Defining it here applies it to your cluster once a leader
  # has been elected. If you are not using an enterprise image
  # or if you plan to introduce the license key via another route, then set
  # these fields to null.
  # Note: the job to apply license runs on both Helm installs and upgrades.
  enterpriseLicense:
    secretName: null
    secretKey: null

  # storage and storageClass are the settings for configuring stateful
  # storage for the server pods. storage should be set to the disk size of
  # the attached volume. storageClass is the class of storage which defaults
  # to null (the Kube cluster will pick the default).
  storage: 10Gi
  storageClass: null

  # connect will enable Connect on all the servers, initializing a CA
  # for Connect-related connections. Other customizations can be done
  # via the extraConfig setting.
  connect: true

  # Resource requests, limits, etc. for the server cluster placement. This
  # should map directly to the value of the resources field for a PodSpec,
  # formatted as a multi-line string. By default no direct resource request
  # is made.
  resources: null

  # updatePartition is used to control a careful rolling update of Consul
  # servers. This should be done particularly when changing the version
  # of Consul. Please refer to the documentation for more information.
  updatePartition: 0

  # disruptionBudget enables the creation of a PodDisruptionBudget to
  # prevent voluntary degrading of the Consul server cluster.
  disruptionBudget:
    enabled: true

    # maxUnavailable will default to (n/2)-1 where n is the number of
    # replicas. If you'd like a custom value, you can specify an override here.
    maxUnavailable: null

  # extraConfig is a raw string of extra configuration to set with the
  # server. This should be JSON.
  extraConfig: |
    {}

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Consul in the path `/consul/userconfig/<name>/`. The value below is
  # an array of objects, examples are shown below.
  extraVolumes: []
    # - type: secret (or "configMap")
    #   name: my-secret
    #   load: false # if true, will add to `-config-dir` to load by Consul
    #   items: # optional items array
    #   - key: key
    #     path: path

  # Affinity Settings
  # Commenting out the affinity variable or setting it to empty will allow
  # deployment to single-node clusters such as Minikube
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: {{ template "consul.name" . }}
              release: "{{ .Release.Name }}"
              component: server
          topologyKey: kubernetes.io/hostname

  # Toleration Settings for server pods
  # This should be a multi-line string matching the Toleration array
  # in a PodSpec.
  tolerations: ""

  # nodeSelector labels for server pod assignment, formatted as a multi-line string.
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # Example:
  # nodeSelector: |
  #   beta.kubernetes.io/arch: amd64
  nodeSelector: null

  # used to assign priority to server pods
  # ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
  priorityClassName: ""

  # Extra annotations to attach to the server pods.
  # This should be a multi-line YAML string.
  # Example:
  #   annotations: |
  #     "annotation-key": "annotation-value"
  annotations: null

  service:
    # Annotations to apply to the server service.
    # Example:
    #   annotations: |
    #     "annotation-key": "annotation-value"
    annotations: null

  # extraEnvironmentVars is a list of extra environment variables to set in the stateful set.
  # These could be used to include proxy settings required for the cloud auto-join feature
  # when the Kubernetes cluster is behind egress HTTP proxies. Additionally, they can be
  # used to configure custom Consul parameters.
  extraEnvironmentVars: {}
    # http_proxy: http://localhost:3128,
    # https_proxy: http://localhost:3128,
    # no_proxy: internal.domain.com

# Add configuration for Consul servers running externally,
# i.e. outside of Kubernetes.
# This information is required if Consul servers are running
# outside of k8s and you’re setting global.tls.enableAutoEncrypt to true.
externalServers:
  enabled: false

  # HTTPS configuration for external servers.
  # Note: HTTP connections to the servers are
  # not supported.
  https:
    # IP, DNS name, or Cloud auto-join string pointing to the external Consul servers.
    # Note that if you’re providing the cloud auto-join string and multiple addresses
    # can be returned, only the first address will be used.
    # This value is required only if you would like to use
    # a different server address from the one specified
    # in the client.join property.
    address: null

    # The HTTPS port of the server.
    port: 443

    # tlsServerName is the server name to use as the SNI
    # host header when connecting with HTTPS.
    # This property is useful in case ‘externalServers.https.address’
    # is not or can not be included in the server certificate’s SANs.
    tlsServerName: null

    # If true, the Helm chart will ignore the CA set in
    # global.tls.caCert and will rely on the container's
    # system CAs for TLS verification when talking to Consul servers.
    # Otherwise, the chart will use global.tls.caCert.
    useSystemRoots: false

# Client, when enabled, configures Consul clients to run on every node
# within the Kube cluster. The current deployment model follows a traditional
# DC where a single agent is deployed per node.
client:
  enabled: "-"
  image: null
  join: null

  # dataDirectoryHostPath is an absolute path to a directory on the host machine
  # to use as the Consul client data directory.
  # If set to the empty string or null, the Consul agent will store its data
  # in the Pod's local filesystem (which will be lost if the Pod is deleted).
  # Security Warning: If setting this, Pod Security Policies *must* be enabled on your cluster
  # and in this Helm chart (via the global.enablePodSecurityPolicies setting)
  # to prevent other Pods from mounting the same host path and gaining
  # access to all of Consul's data. Consul's data is not encrypted at rest.
  dataDirectoryHostPath: null

  # If true, Consul's gRPC port will be exposed (see https://www.consul.io/docs/agent/options.html#grpc_port).
  # This should be set to true if connectInject or meshGateway is enabled.
  grpc: true

  # exposeGossipPorts exposes the clients' gossip ports as hostPorts.
  # This is only necessary if pod IPs in the k8s cluster are not directly
  # routable and the Consul servers are outside of the k8s cluster. This
  # also changes the clients' advertised IP to the hostIP rather than podIP.
  exposeGossipPorts: false

  # Resource requests, limits, etc. for the client cluster placement. This
  # should map directly to the value of the resources field for a PodSpec,
  # formatted as a multi-line string. By default no direct resource request
  # is made.
  resources: null

  # extraConfig is a raw string of extra configuration to set with the
  # client. This should be JSON.
  extraConfig: |
    {}

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Consul in the path `/consul/userconfig/<name>/`. The value below is
  # an array of objects, examples are shown below.
  extraVolumes: []
    # - type: secret (or "configMap")
    #   name: my-secret
    #   load: false # if true, will add to `-config-dir` to load by Consul

  # Toleration Settings for Client pods
  # This should be a multi-line string matching the Toleration array
  # in a PodSpec.
  # The example below will allow Client pods to run on every node
  # regardless of taints
  # tolerations: |
  #   - operator: "Exists"
  tolerations: ""

  # nodeSelector labels for client pod assignment, formatted as a multi-line string.
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # Example:
  # nodeSelector: |
  #   beta.kubernetes.io/arch: amd64
  nodeSelector: null

  # Affinity Settings for Client pods, formatted as a multi-line YAML string.
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  # Example:
  # affinity: |
  #   nodeAffinity:
  #     requiredDuringSchedulingIgnoredDuringExecution:
  #       nodeSelectorTerms:
  #       - matchExpressions:
  #         - key: node-role.kubernetes.io/master
  #           operator: DoesNotExist
  affinity: {}

  # used to assign priority to client pods
  # ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
  priorityClassName: ""

  # Extra annotations to attach to the client pods
  # Example:
  #   annotations: |
  #     "annotation-key": "annotation-value"
  annotations: null

  # extraEnvironmentVars is a list of extra environment variables to set in the pod.
  # These could be used to include proxy settings required for the cloud auto-join feature
  # when the Kubernetes cluster is behind egress HTTP proxies. Additionally, they can be
  # used to configure custom Consul parameters.
  extraEnvironmentVars: {}
    # http_proxy: http://localhost:3128,
    # https_proxy: http://localhost:3128,
    # no_proxy: internal.domain.com

  # dnsPolicy to use.
  dnsPolicy: null

  # updateStrategy for the DaemonSet.
  # See https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/#daemonset-update-strategy.
  # This should be a multi-line string mapping directly to the updateStrategy
  # Example:
  #  updateStrategy: |
  #    rollingUpdate:
  #      maxUnavailable: 5
  #    type: RollingUpdate
  updateStrategy: null

  # snapshotAgent contains settings for setting up and running snapshot agents
  # within the Consul clusters. They are required to be co-located with Consul
  # clients, so will inherit the clients' nodeSelector, tolerations and affinity.
  # This is an Enterprise feature only.
  snapshotAgent:
    enabled: false

    # replicas determines how many snapshot agent pods are created
    replicas: 2

    # configSecret references a Kubernetes secret that should be manually created to
    # contain the entire config to be used on the snapshot agent. This is the preferred
    # method of configuration since there are usually storage credentials present.
    # Snapshot agent config details:
    # https://www.consul.io/docs/commands/snapshot/agent.html#config-file-options-
    # To create a secret:
    # https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-using-kubectl-create-secret
    configSecret:
      secretName: null
      secretKey: null

# Configuration for DNS configuration within the Kubernetes cluster.
# This creates a service that routes to all agents (client or server)
# for serving DNS requests. This DOES NOT automatically configure kube-dns
# today, so you must still manually configure a `stubDomain` with kube-dns
# for this to have any effect:
# https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configure-stub-domain-and-upstream-dns-servers
dns:
  enabled: "-"

  # Set a predefined cluster IP for the DNS service.
  # Useful if you need to reference the DNS service's IP
  # address in CoreDNS config.
  clusterIP: null

  # Extra annotations to attach to the dns service
  # This should be a multi-line string of
  # annotations to apply to the dns Service
  annotations: null

ui:
  # True if you want to enable the Consul UI. The UI will run only
  # on the server nodes. This makes UI access via the service below (if
  # enabled) predictable rather than "any node" if you're running Consul
  # clients as well.
  enabled: "-"

  # True if you want to create a Service entry for the Consul UI.
  #
  # serviceType can be used to control the type of service created. For
  # example, setting this to "LoadBalancer" will create an external load
  # balancer (for supported K8S installations) to access the UI.
  service:
    enabled: true
    type: null

    # Annotations to apply to the UI service.
    # Example:
    #   annotations: |
    #     "annotation-key": "annotation-value"
    annotations: null

    # Additional ServiceSpec values
    # This should be a multi-line string mapping directly to a Kubernetes
    # ServiceSpec object.
    additionalSpec: null

# syncCatalog will run the catalog sync process to sync K8S with Consul
# services. This can run bidirectional (default) or unidirectionally (Consul
# to K8S or K8S to Consul only).
#
# This process assumes that a Consul agent is available on the host IP.
# This is done automatically if clients are enabled. If clients are not
# enabled then set the node selection so that it chooses a node with a
# Consul agent.
syncCatalog:
  # True if you want to enable the catalog sync. Set to "-" to inherit from
  # global.enabled.
  enabled: true
  image: null
  default: true # true will sync by default, otherwise requires annotation

  # toConsul and toK8S control whether syncing is enabled to Consul or K8S
  # as a destination. If both of these are disabled, the sync will do nothing.
  toConsul: true
  toK8S: true

  # k8sPrefix is the service prefix to prepend to services before registering
  # with Kubernetes. For example "consul-" will register all services
  # prepended with "consul-". (Consul -> Kubernetes sync)
  k8sPrefix: "consul-"

  # k8sAllowNamespaces is a list of k8s namespaces to sync the k8s services from.
  # If a k8s namespace is not included  in this list or is listed in `k8sDenyNamespaces`,
  # services in that k8s namespace will not be synced even if they are explicitly
  # annotated. Use ["*"] to automatically allow all k8s namespaces.
  #
  # For example, ["namespace1", "namespace2"] will only allow services in the k8s
  # namespaces `namespace1` and `namespace2` to be synced and registered
  # with Consul. All other k8s namespaces will be ignored.
  #
  # To deny all namespaces, set this to [].
  #
  # Note: `k8sDenyNamespaces` takes precedence over values defined here.
  # Requires consul-k8s v0.12+
  k8sAllowNamespaces: ["*"]

  # k8sDenyNamespaces is a list of k8s namespaces that should not have their
  # services synced. This list takes precedence over `k8sAllowNamespaces`.
  # `*` is not supported because then nothing would be allowed to sync.
  # Requires consul-k8s v0.12+.
  #
  # For example, if `k8sAllowNamespaces` is `["*"]` and `k8sDenyNamespaces` is
  # `["namespace1", "namespace2"]`, then all k8s namespaces besides "namespace1"
  # and "namespace2" will be synced.
  k8sDenyNamespaces: ["kube-system", "kube-public"]

  # [DEPRECATED] Use k8sAllowNamespaces and k8sDenyNamespaces instead. For
  # backwards compatibility, if both this and the allow/deny lists are set,
  # the allow/deny lists will be ignored.
  # k8sSourceNamespace is the Kubernetes namespace to watch for service
  # changes and sync to Consul. If this is not set then it will default
  # to all namespaces.
  k8sSourceNamespace: null

  # [Enterprise Only] These settings manage the catalog sync's interaction with
  # Consul namespaces (requires consul-ent v1.7+ and consul-k8s v0.12+).
  # Also, `global.enableConsulNamespaces` must be true.
  consulNamespaces:
    # consulDestinationNamespace is the name of the Consul namespace to register all
    # k8s services into. If the Consul namespace does not already exist,
    # it will be created. This will be ignored if `mirroringK8S` is true.
    consulDestinationNamespace: "default"

    # mirroringK8S causes k8s services to be registered into a Consul namespace
    # of the same name as their k8s namespace, optionally prefixed if
    # `mirroringK8SPrefix` is set below. If the Consul namespace does not
    # already exist, it will be created. Turning this on overrides the
    # `consulDestinationNamespace` setting.
    # `addK8SNamespaceSuffix` may no longer be needed if enabling this option.
    mirroringK8S: false

    # If `mirroringK8S` is set to true, `mirroringK8SPrefix` allows each Consul namespace
    # to be given a prefix. For example, if `mirroringK8SPrefix` is set to "k8s-", a
    # service in the k8s `staging` namespace will be registered into the
    # `k8s-staging` Consul namespace.
    mirroringK8SPrefix: ""

  # addK8SNamespaceSuffix appends Kubernetes namespace suffix to
  # each service name synced to Consul, separated by a dash.
  # For example, for a service 'foo' in the default namespace,
  # the sync process will create a Consul service named 'foo-default'.
  # Set this flag to true to avoid registering services with the same name
  # but in different namespaces as instances for the same Consul service.
  # Namespace suffix is not added if 'annotationServiceName' is provided.
  addK8SNamespaceSuffix: true

  # consulPrefix is the service prefix which prepends itself
  # to Kubernetes services registered within Consul
  # For example, "k8s-" will register all services prepended with "k8s-".
  # (Kubernetes -> Consul sync)
  # consulPrefix is ignored when 'annotationServiceName' is provided.
  # NOTE: Updating this property to a non-null value for an existing installation will result in deregistering
  # of existing services in Consul and registering them with a new name.
  consulPrefix: null

  # k8sTag is an optional tag that is applied to all of the Kubernetes services
  # that are synced into Consul. If nothing is set, defaults to "k8s".
  # (Kubernetes -> Consul sync)
  k8sTag: null

  # syncClusterIPServices syncs services of the ClusterIP type, which may
  # or may not be broadly accessible depending on your Kubernetes cluster.
  # Set this to false to skip syncing ClusterIP services.
  syncClusterIPServices: true

  # nodePortSyncType configures the type of syncing that happens for NodePort
  # services. The valid options are: ExternalOnly, InternalOnly, ExternalFirst.
  # - ExternalOnly will only use a node's ExternalIP address for the sync
  # - InternalOnly uses the node's InternalIP address
  # - ExternalFirst will preferentially use the node's ExternalIP address, but
  #   if it doesn't exist, it will use the node's InternalIP address instead.
  nodePortSyncType: ExternalFirst

  # aclSyncToken refers to a Kubernetes secret that you have created that contains
  # an ACL token for your Consul cluster which allows the sync process the correct
  # permissions. This is only needed if ACLs are enabled on the Consul cluster.
  aclSyncToken:
    secretName: null
    secretKey: null

  # nodeSelector labels for syncCatalog pod assignment, formatted as a multi-line string.
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # Example:
  # nodeSelector: |
  #   beta.kubernetes.io/arch: amd64
  nodeSelector: null

  # Log verbosity level. One of "trace", "debug", "info", "warn", or "error".
  logLevel: info

  # Override the default interval to perform syncing operations creating Consul services.
  consulWriteInterval: null

# ConnectInject will enable the automatic Connect sidecar injector.
connectInject:
  # True if you want to enable connect injection. Set to "-" to inherit from
  # global.enabled.
  # Requires consul-k8s >= 0.10.1.
  enabled: true
  image: null # image for consul-k8s that contains the injector
  default: false # true will inject by default, otherwise requires annotation

  # The Docker image for Consul to use when performing Connect injection.
  # Defaults to global.image.
  imageConsul: null

  # The Docker image for envoy to use as the proxy sidecar when performing
  # Connect injection. If using Consul 1.7+, the envoy version must be 1.13+.
  # If not set, the image used depends on the consul-k8s version. For
  # consul-k8s 0.12.0 the default is envoyproxy/envoy-alpine:v1.13.0.
  imageEnvoy: null

  # namespaceSelector is the selector for restricting the webhook to only
  # specific namespaces. This should be set to a multiline string.
  # See https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-namespaceselector
  # for more details.
  # Example:
  # namespaceSelector: |
  #   matchLabels:
  #     namespace-label: label-value
  namespaceSelector: null

  # k8sAllowNamespaces is a list of k8s namespaces to allow Connect sidecar
  # injection in. If a k8s namespace is not included or is listed in `k8sDenyNamespaces`,
  # pods in that k8s namespace will not be injected even if they are explicitly
  # annotated. Use ["*"] to automatically allow all k8s namespaces.
  #
  # For example, ["namespace1", "namespace2"] will only allow pods in the k8s
  # namespaces `namespace1` and `namespace2` to have Connect sidecars injected
  # and registered with Consul. All other k8s namespaces will be ignored.
  #
  # To deny all namespaces, set this to [].
  #
  # Note: `k8sDenyNamespaces` takes precedence over values defined here and
  # `namespaceSelector` takes precedence over both since it is applied first.
  # `kube-system` and `kube-public` are never injected, even if included here.
  # Requires consul-k8s v0.12+
  k8sAllowNamespaces: ["*"]

  # k8sDenyNamespaces is a list of k8s namespaces that should not allow Connect
  # sidecar injection. This list takes precedence over `k8sAllowNamespaces`.
  # `*` is not supported because then nothing would be allowed to be injected.
  #
  # For example, if `k8sAllowNamespaces` is `["*"]` and k8sDenyNamespaces is
  # `["namespace1", "namespace2"]`, then all k8s namespaces besides "namespace1"
  # and "namespace2" will be available for injection.
  #
  # Note: `namespaceSelector` takes precedence over this since it is applied first.
  # `kube-system` and `kube-public` are never injected.
  # Requires consul-k8s v0.12+.
  k8sDenyNamespaces: []

  # [Enterprise Only] These settings manage the connect injector's interaction with
  # Consul namespaces (requires consul-ent v1.7+ and consul-k8s v0.12+).
  # Also, `global.enableConsulNamespaces` must be true.
  consulNamespaces:
    # consulDestinationNamespace is the name of the Consul namespace to register all
    # k8s pods into. If the Consul namespace does not already exist,
    # it will be created. This will be ignored if `mirroringK8S` is true.
    consulDestinationNamespace: "default"

    # mirroringK8S causes k8s pods to be registered into a Consul namespace
    # of the same name as their k8s namespace, optionally prefixed if
    # `mirroringK8SPrefix` is set below. If the Consul namespace does not
    # already exist, it will be created. Turning this on overrides the
    # `consulDestinationNamespace` setting.
    mirroringK8S: false

    # If `mirroringK8S` is set to true, `mirroringK8SPrefix` allows each Consul namespace
    # to be given a prefix. For example, if `mirroringK8SPrefix` is set to "k8s-", a
    # pod in the k8s `staging` namespace will be registered into the
    # `k8s-staging` Consul namespace.
    mirroringK8SPrefix: ""

  # The certs section configures how the webhook TLS certs are configured.
  # These are the TLS certs for the Kube apiserver communicating to the
  # webhook. By default, the injector will generate and manage its own certs,
  # but this requires the ability for the injector to update its own
  # MutatingWebhookConfiguration. In a production environment, custom certs
  # should probably be used. Configure the values below to enable this.
  certs:
    # secretName is the name of the secret that has the TLS certificate and
    # private key to serve the injector webhook. If this is null, then the
    # injector will default to its automatic management mode that will assign
    # a service account to the injector to generate its own certificates.
    secretName: null

    # caBundle is a base64-encoded PEM-encoded certificate bundle for the
    # CA that signed the TLS certificate that the webhook serves. This must
    # be set if secretName is non-null.
    caBundle: ""

    # certName and keyName are the names of the files within the secret for
    # the TLS cert and private key, respectively. These have reasonable
    # defaults but can be customized if necessary.
    certName: tls.crt
    keyName: tls.key

  # nodeSelector labels for connectInject pod assignment, formatted as a multi-line string.
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # Example:
  # nodeSelector: |
  #   beta.kubernetes.io/arch: amd64
  nodeSelector: null

  # aclBindingRuleSelector accepts a query that defines which Service Accounts
  # can authenticate to Consul and receive an ACL token during Connect injection.
  # The default setting, i.e. serviceaccount.name!=default, prevents the
  # 'default' Service Account from logging in.
  # If set to an empty string all service accounts can log in.
  # This only has effect if ACLs are enabled.
  #
  # See https://www.consul.io/docs/acl/acl-auth-methods.html#binding-rules
  # and https://www.consul.io/docs/acl/auth-methods/kubernetes.html#trusted-identity-attributes
  # for more details.
  # Requires Consul >= v1.5 and consul-k8s >= v0.8.0.
  aclBindingRuleSelector: "serviceaccount.name!=default"

  # If not using global.acls.manageSystemACLs and instead manually setting up an
  # auth method for Connect inject, set this to the name of your auth method.
  overrideAuthMethodName: ""

  # aclInjectToken refers to a Kubernetes secret that you have created that contains
  # an ACL token for your Consul cluster which allows the Connect injector the correct
  # permissions. This is only needed if Consul namespaces [Enterprise only] and ACLs
  # are enabled on the Consul cluster and you are not setting
  # `global.acls.manageSystemACLs` to `true`.
  # This token needs to have `operator = "write"` privileges to be able to
  # create Consul namespaces.
  aclInjectToken:
    secretName: null
    secretKey: null

  # Requires Consul >= v1.5 and consul-k8s >= v0.8.1.
  centralConfig:
    # enabled controls whether central config is enabled on all servers and clients.
    # See https://www.consul.io/docs/agent/options.html#enable_central_service_config.
    # If changing this after installation, servers and clients must be restarted
    # for the change to take effect.
    enabled: true

    # defaultProtocol allows you to specify a convenience default protocol if
    # most of your services are of the same protocol type. The individual annotation
    # on any given pod will override this value.
    # Valid values are "http", "http2", "grpc" and "tcp".
    defaultProtocol: null

    # proxyDefaults is a raw json string that will be written as the value of
    # the "config" key of the global proxy-defaults config entry.
    # See: https://www.consul.io/docs/agent/config-entries/proxy-defaults.html
    # NOTE: Changes to this value after the chart is first installed have *no*
    # effect. In order to change the proxy-defaults config after installation,
    # you must use the Consul API.
    proxyDefaults: |
      {}

# Mesh Gateways enable Consul Connect to work across Consul datacenters.
meshGateway:
  # If mesh gateways are enabled, a Deployment will be created that runs
  # gateways and Consul Connect will be configured to use gateways.
  # See https://www.consul.io/docs/connect/mesh_gateway.html
  # Requirements: consul >= 1.6.0 and consul-k8s >= 0.9.0 if using
  # global.acls.manageSystemACLs.
  enabled: true

  # Globally configure which mode the gateway should run in.
  # Can be set to either "remote", "local", "none" or empty string or null.
  # See https://consul.io/docs/connect/mesh_gateway.html#modes-of-operation for
  # a description of each mode.
  # If set to anything other than "" or null, connectInject.centralConfig.enabled
  # should be set to true so that the global config will actually be used.
  # If set to the empty string, no global default will be set and the gateway mode
  # will need to be set individually for each service.
  globalMode: local

  # Number of replicas for the Deployment.
  replicas: 2

  # What gets registered as WAN address for the gateway.
  wanAddress:
    # source configures where to retrieve the WAN address (and possibly port)
    # for the mesh gateway from.
    # Can be set to either: Service, NodeIP, NodeName or Static.
    #
    # Service - Determine the address based on the service type.
    #   If service.type=LoadBalancer use the external IP or hostname of
    #   the service. Use the port set by service.port.
    #   If service.type=NodePort use the Node IP. The port will be set to
    #   service.nodePort so service.nodePort cannot be null.
    #   If service.type=ClusterIP use the ClusterIP. The port will be set to
    #   service.port.
    #   service.type=ExternalName is not supported.
    # NodeIP - The node IP as provided by the Kubernetes downward API.
    # NodeName - The name of the node as provided by the Kubernetes downward
    #   API. This is useful if the node names are DNS entries that
    #   are routable from other datacenters.
    # Static - Use the address hardcoded in meshGateway.wanAddress.static.
    source: "Service"

    # Port that gets registered for WAN traffic.
    # If source is set to "Service" then this setting will have no effect.
    # See the documentation for source as to which port will be used in that
    # case.
    port: 443

    # If source is set to "Static" then this value will be used as the WAN
    # address of the mesh gateways. This is useful if you've configured a
    # DNS entry to point to your mesh gateways.
    static: ""

  # The service option configures the Service that fronts the Gateway Deployment.
  service:
    # Whether to create a Service or not.
    enabled: true

    # Type of service, ex. LoadBalancer, ClusterIP.
    type: LoadBalancer

    # Port that the service will be exposed on.
    # The targetPort will be set to meshGateway.containerPort.
    port: 443

    # Optionally hardcode the nodePort of the service if using type: NodePort.
    # If not set and using type: NodePort, Kubernetes will automatically assign
    # a port.
    nodePort: null

    # Annotations to apply to the mesh gateway service.
    # Example:
    #   annotations: |
    #     "annotation-key": "annotation-value"
    annotations: null

    # Optional YAML string that will be appended to the Service spec.
    additionalSpec: null

  # Envoy image to use. For Consul v1.7+, Envoy version 1.13+ is required.
  imageEnvoy: envoyproxy/envoy:v1.13.0

  # If set to true, gateway Pods will run on the host network.
  hostNetwork: false

  # dnsPolicy to use.
  dnsPolicy: null

  # Override the default 'mesh-gateway' service name registered in Consul.
  # Cannot be used if global.acls.manageSystemACLs is true since the ACL token
  # generated is only for the name 'mesh-gateway'.
  consulServiceName: ""

  # Port that the gateway will run on inside the container.
  containerPort: 8443

  # Optional hostPort for the gateway to be exposed on.
  # This can be used with wanAddress.port and wanAddress.useNodeIP
  # to expose the gateways directly from the node.
  # If hostNetwork is true, this must be null or set to the same port as
  # containerPort.
  # NOTE: Cannot set to 8500 or 8502 because those are reserved for the Consul
  # agent.
  hostPort: null

  # If there are no connect-enabled services running, then the gateway
  # will fail health checks. You may disable health checks as a temporary
  # workaround.
  enableHealthChecks: true

  resources: |
    requests:
      memory: "128Mi"
      cpu: "250m"
    limits:
      memory: "256Mi"
      cpu: "500m"

  # By default, we set an anti affinity so that two gateway pods won't be
  # on the same node. NOTE: Gateways require that Consul client agents are
  # also running on the nodes alongside each gateway Pod.
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: {{ template "consul.name" . }}
              release: "{{ .Release.Name }}"
              component: mesh-gateway
          topologyKey: kubernetes.io/hostname

  # Optional YAML string to specify tolerations.
  tolerations: null

  # Optional YAML string to specify a nodeSelector config.
  nodeSelector: null

  # Optional priorityClassName.
  priorityClassName: ""

  # Annotations to apply to the mesh gateway deployment.
  # Example:
  #   annotations: |
  #     "annotation-key": "annotation-value"
  annotations: null

# Control whether a test Pod manifest is generated when running helm template.
# When using helm install, the test Pod is not submitted to the cluster so this
# is only useful when running helm template.
tests:
  enabled: true

adilyse commented 4 years ago

Hey @Ares3266,

Could you describe your elasticsearch pod and post that information as well? It might give us some extra information as we work on investigating this.

ghost commented 4 years ago

Output of `kubectl describe pod/strix-es-elasticsearch-0`:

Name:           strix-es-elasticsearch-0
Namespace:      default
Priority:       0
Node:           gke-phoenix-cluster-phoenix-node-pool-d457f538-xhtk/10.0.40.227
Start Time:     Mon, 27 Apr 2020 15:33:17 +0100
Labels:         common.k8s.elastic.co/type=elasticsearch
                controller-revision-hash=strix-es-elasticsearch-67fc7445b9
                elasticsearch.k8s.elastic.co/cluster-name=strix
                elasticsearch.k8s.elastic.co/config-hash=1795391612
                elasticsearch.k8s.elastic.co/http-scheme=http
                elasticsearch.k8s.elastic.co/node-data=true
                elasticsearch.k8s.elastic.co/node-ingest=true
                elasticsearch.k8s.elastic.co/node-master=true
                elasticsearch.k8s.elastic.co/node-ml=true
                elasticsearch.k8s.elastic.co/statefulset-name=strix-es-elasticsearch
                elasticsearch.k8s.elastic.co/version=7.6.2
                statefulset.kubernetes.io/pod-name=strix-es-elasticsearch-0
Annotations:    consul.hashicorp.com/connect-inject: true
                kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container elasticsearch; cpu request for init container sysctl
                update.k8s.elastic.co/timestamp: 2020-04-27T14:34:03.909017302Z
Status:         Running
IP:             172.22.2.4
IPs:            <none>
Controlled By:  StatefulSet/strix-es-elasticsearch
Init Containers:
  elastic-internal-init-filesystem:
    Container ID:  docker://4f6485a3ffc41e5e6f6db149879bd0fb522487fd766838fcd0c9bda1b77bac46
    Image:         docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    Image ID:      docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:59342c577e2b7082b819654d119f42514ddf47f0699c8b54dc1f0150250ce7aa
    Port:          <none>
    Host Port:     <none>
    Command:
      bash
      -c
      /mnt/elastic-internal/scripts/prepare-fs.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 27 Apr 2020 15:34:02 +0100
      Finished:     Mon, 27 Apr 2020 15:34:03 +0100
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_IP:     (v1:status.podIP)
      POD_NAME:  strix-es-elasticsearch-0 (v1:metadata.name)
    Mounts:
      /mnt/elastic-internal/downward-api from downward-api (ro)
      /mnt/elastic-internal/elasticsearch-bin-local from elastic-internal-elasticsearch-bin-local (rw)
      /mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
      /mnt/elastic-internal/elasticsearch-config-local from elastic-internal-elasticsearch-config-local (rw)
      /mnt/elastic-internal/elasticsearch-plugins-local from elastic-internal-elasticsearch-plugins-local (rw)
      /mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
      /mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
      /mnt/elastic-internal/transport-certificates from elastic-internal-transport-certificates (ro)
      /mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
      /mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
      /usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
      /usr/share/elasticsearch/data from elasticsearch-data (rw)
      /usr/share/elasticsearch/logs from elasticsearch-logs (rw)
  sysctl:
    Container ID:  docker://fddc4e6d27d40ada7314cb960755b92c7d52f2ec843b5cc85541cb2636ace965
    Image:         docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    Image ID:      docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:59342c577e2b7082b819654d119f42514ddf47f0699c8b54dc1f0150250ce7aa
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      sysctl -w vm.max_map_count=262144
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 27 Apr 2020 15:34:04 +0100
      Finished:     Mon, 27 Apr 2020 15:34:04 +0100
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:
      POD_IP:     (v1:status.podIP)
      POD_NAME:  strix-es-elasticsearch-0 (v1:metadata.name)
    Mounts:
      /mnt/elastic-internal/downward-api from downward-api (ro)
      /mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
      /mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
      /mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
      /mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
      /mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
      /usr/share/elasticsearch/bin from elastic-internal-elasticsearch-bin-local (rw)
      /usr/share/elasticsearch/config from elastic-internal-elasticsearch-config-local (rw)
      /usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
      /usr/share/elasticsearch/config/transport-certs from elastic-internal-transport-certificates (ro)
      /usr/share/elasticsearch/data from elasticsearch-data (rw)
      /usr/share/elasticsearch/logs from elasticsearch-logs (rw)
      /usr/share/elasticsearch/plugins from elastic-internal-elasticsearch-plugins-local (rw)
Containers:
  elasticsearch:
    Container ID:   docker://b92d1ebc46eeb969d7d8f92af7d5fe8149d0ee137e812038739c1730e73ded20
    Image:          docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    Image ID:       docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:59342c577e2b7082b819654d119f42514ddf47f0699c8b54dc1f0150250ce7aa
    Ports:          9200/TCP, 9300/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Mon, 27 Apr 2020 15:34:05 +0100
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  2Gi
    Requests:
      cpu:      100m
      memory:   2Gi
    Readiness:  exec [bash -c /mnt/elastic-internal/scripts/readiness-probe-script.sh] delay=10s timeout=5s period=5s #success=1 #failure=3
    Environment:
      HEADLESS_SERVICE_NAME:     strix-es-elasticsearch
      NSS_SDB_USE_CACHE:         no
      POD_IP:                     (v1:status.podIP)
      POD_NAME:                  strix-es-elasticsearch-0 (v1:metadata.name)
      PROBE_PASSWORD_PATH:       /mnt/elastic-internal/probe-user/elastic-internal-probe
      PROBE_USERNAME:            elastic-internal-probe
      READINESS_PROBE_PROTOCOL:  http
    Mounts:
      /mnt/elastic-internal/downward-api from downward-api (ro)
      /mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
      /mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
      /mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
      /mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
      /mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
      /usr/share/elasticsearch/bin from elastic-internal-elasticsearch-bin-local (rw)
      /usr/share/elasticsearch/config from elastic-internal-elasticsearch-config-local (rw)
      /usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
      /usr/share/elasticsearch/config/transport-certs from elastic-internal-transport-certificates (ro)
      /usr/share/elasticsearch/data from elasticsearch-data (rw)
      /usr/share/elasticsearch/logs from elasticsearch-logs (rw)
      /usr/share/elasticsearch/plugins from elastic-internal-elasticsearch-plugins-local (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  strix-es-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  strix-es-data-strix-es-elasticsearch-0
    ReadOnly:   false
  elasticsearch-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  elasticsearch-data-strix-es-elasticsearch-0
    ReadOnly:   false
  downward-api:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
  elastic-internal-elasticsearch-bin-local:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  elastic-internal-elasticsearch-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  strix-es-elasticsearch-es-config
    Optional:    false
  elastic-internal-elasticsearch-config-local:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  elastic-internal-elasticsearch-plugins-local:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  elastic-internal-http-certificates:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  strix-es-http-certs-internal
    Optional:    false
  elastic-internal-probe-user:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  strix-es-internal-users
    Optional:    false
  elastic-internal-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      strix-es-scripts
    Optional:  false
  elastic-internal-transport-certificates:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  strix-es-transport-certificates
    Optional:    false
  elastic-internal-unicast-hosts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      strix-es-unicast-hosts
    Optional:  false
  elastic-internal-xpack-file-realm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  strix-es-xpack-file-realm
    Optional:    false
  elasticsearch-logs:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:      
    SizeLimit:   <unset>
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

adilyse commented 4 years ago

It looks like there's a mismatch between the service name registered for Elasticsearch in Consul and the upstream definition. If not specified by an annotation, this defaults to the name of the first container, in this case elasticsearch.

So for your upstream definition, you'll need:

consul.hashicorp.com/connect-service-upstreams: "elasticsearch:1234,static-server:1235"

rather than consul-elasticsearch.
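
Once the names match, the sidecar should expose Elasticsearch on the local port from your annotation, so a quick sanity check from the client pod would be something along these lines:

kubectl exec strix-test -c static-client -- curl -s http://localhost:1234/_cluster/health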

ghost commented 4 years ago

Updated to reflect those changes, issue persists.

lkysow commented 4 years ago

Can you show us the output of curling from one of the Consul servers:

curl localhost:8500/v1/catalog/services

If there's the elasticsearch service, then:

curl localhost:8500/v1/catalog/service/elasticsearch?pretty=true
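
If you don't have a shell on a server already, port-forwarding to one of the server pods should work too (the pod name below assumes default naming; yours may carry a release prefix):

kubectl port-forward consul-server-0 8500:8500
curl localhost:8500/v1/catalog/services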

lkysow commented 4 years ago

Actually, I just realized the pod isn't getting injected. Can you look at the logs of the consul-connect-injector-webhook-deployment?
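
Something along these lines should pull them (the deployment name may be prefixed with your Helm release name):

kubectl logs deployment/consul-connect-injector-webhook-deployment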

ghost commented 4 years ago

That's interesting, actually; that state has changed since I opened this issue. There were 3 pods in there, with it getting injected. I'll turn debug on and rebuild the ECK and test deployments.

ghost commented 4 years ago

@adilyse your point was actually spot on. Changing the service name to "elasticsearch" after rebuilding the whole deployment has resolved this issue. Good spot. I don't exactly recall why I used the other name, but I suspect it was because that was what it was listed as in the UI.

Curiously though, why does that return an empty response instead of... well, anything else?

Edit: The reason I called it consul-elasticsearch and not just elasticsearch is that I was looking in kubectl get services for the name, and the ExternalName is what I used. I wonder if that might be a nice feature to support in the future?