elastic / helm-charts

You know, for Kubernetes
Apache License 2.0

Deployment failed without error adding slack action to 'keystore:' property in values.yaml #280

Closed dcvtruong closed 4 years ago

dcvtruong commented 5 years ago

Chart version: 7.3.0
Kubernetes version: 1.13.10
Kubernetes provider: KUBESPRAY
Helm Version: 2.13.1

helm get release output (from helm get elasticsearch):

REVISION: 1
RELEASED: Thu Sep 12 01:33:16 2019
CHART: elasticsearch-7.3.0
USER-SUPPLIED VALUES:
{}

COMPUTED VALUES:
antiAffinity: hard
antiAffinityTopologyKey: kubernetes.io/hostname
clusterHealthCheckParams: wait_for_status=green&timeout=1s
clusterName: elasticsearch
esConfig:
  elasticsearch.yml: |
    xpack:
      security:
        enabled: false
esJavaOpts: -Xmx5g -Xms5g
esMajorVersion: ""
extraEnvs: []
extraInitContainers: ""
extraVolumeMounts: ""
extraVolumes: ""
fsGroup: ""
fullnameOverride: ""
httpPort: 9200
image: docker.elastic.co/elasticsearch/elasticsearch
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 7.3.0
ingress:
  annotations: {}
  enabled: false
  hosts:
  - chart-example.local
  path: /
  tls: []
initResources: {}
keystore: []
labels: {}
lifecycle: {}
masterService: ""
masterTerminationFix: false
maxUnavailable: 1
minimumMasterNodes: 2
nameOverride: ""
networkHost: 0.0.0.0
nodeAffinity: {}
nodeGroup: master
nodeSelector: {}
persistence:
  annotations: {}
  enabled: true
podAnnotations: {}
podManagementPolicy: Parallel
podSecurityContext:
  fsGroup: 1000
priorityClassName: ""
protocol: http
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
replicas: 3
resources:
  limits:
    cpu: 1000m
    memory: 12Gi
  requests:
    cpu: 100m
    memory: 4Gi
roles:
  data: "true"
  ingest: "true"
  master: "true"
schedulerName: ""
secretMounts: []
securityContext:
  capabilities:
    drop:
    - ALL
  runAsNonRoot: true
  runAsUser: 1000
service:
  annotations: {}
  nodePort: 30998
  type: NodePort
sidecarResources: {}
sysctlInitContainer:
  enabled: true
sysctlVmMaxMapCount: 262144
terminationGracePeriod: 120
tolerations: []
transportPort: 9300
updateStrategy: RollingUpdate
volumeClaimTemplate:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 350Gi
  storageClassName: elkmgmtdevsc

HOOKS:
---
# elasticsearch-weicw-test
apiVersion: v1
kind: Pod
metadata:
  name: "elasticsearch-weicw-test"
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
  - name: "elasticsearch-pjpab-test"
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.3.0"
    command:
      - "sh"
      - "-c"
      - |
        #!/usr/bin/env bash -e
        curl -XGET --fail 'elasticsearch-master:9200/_cluster/health?wait_for_status=green&timeout=1s'
  restartPolicy: Never
MANIFEST:

---
# Source: elasticsearch/templates/poddisruptionbudget.yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: "elasticsearch-master-pdb"
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: "elasticsearch-master"
---
# Source: elasticsearch/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-master-config
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch-7.3.0"
    app: "elasticsearch-master"
data:
  elasticsearch.yml: |
    xpack:
      security:
        enabled: false
---
# Source: elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch-7.3.0"
    app: "elasticsearch-master"
  annotations:
    {}

spec:
  type: NodePort
  selector:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch-7.3.0"
    app: "elasticsearch-master"
  ports:
  - name: http
    protocol: TCP
    port: 9200
    nodePort: 30998
  - name: transport
    protocol: TCP
    port: 9300
---
# Source: elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master-headless
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch-7.3.0"
    app: "elasticsearch-master"
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
  # Create endpoints also if the related pod isn't ready
  publishNotReadyAddresses: true
  selector:
    app: "elasticsearch-master"
  ports:
  - name: http
    port: 9200
  - name: transport
    port: 9300
---
# Source: elasticsearch/templates/statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch-7.3.0"
    app: "elasticsearch-master"
  annotations:
    esMajorVersion: "7"
spec:
  serviceName: elasticsearch-master-headless
  selector:
    matchLabels:
      app: "elasticsearch-master"
  replicas: 3
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-master
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 350Gi
      storageClassName: elkmgmtdevsc

  template:
    metadata:
      name: "elasticsearch-master"
      labels:
        heritage: "Tiller"
        release: "elasticsearch"
        chart: "elasticsearch-7.3.0"
        app: "elasticsearch-master"
      annotations:

        configchecksum: cc2628938eb56d53baa8671bfcd8fa6f85ba6b849066b8ee6a70d147b2e8b82
    spec:
      securityContext:
        fsGroup: 1000

      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - "elasticsearch-master"
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 120
      volumes:
        - name: esconfig
          configMap:
            name: elasticsearch-master-config
      initContainers:
      - name: configure-sysctl
        securityContext:
          runAsUser: 0
          privileged: true
        image: "docker.elastic.co/elasticsearch/elasticsearch:7.3.0"
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        resources:
          {}

      containers:
      - name: "elasticsearch"
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          runAsUser: 1000

        image: "docker.elastic.co/elasticsearch/elasticsearch:7.3.0"
        imagePullPolicy: "IfNotPresent"
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 3
          timeoutSeconds: 5

          exec:
            command:
              - sh
              - -c
              - |
                #!/usr/bin/env bash -e
                # If the node is starting up wait for the cluster to be ready (request params: 'wait_for_status=green&timeout=1s' )
                # Once it has started only check that the node itself is responding
                START_FILE=/tmp/.es_start_file

                http () {
                    local path="${1}"
                    if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
                      BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
                    else
                      BASIC_AUTH=''
                    fi
                    curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path}
                }

                if [ -f "${START_FILE}" ]; then
                    echo 'Elasticsearch is already running, lets check the node is healthy'
                    http "/"
                else
                    echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
                    if http "/_cluster/health?wait_for_status=green&timeout=1s" ; then
                        touch ${START_FILE}
                        exit 0
                    else
                        echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
                        exit 1
                    fi
                fi
        ports:
        - name: http
          containerPort: 9200
        - name: transport
          containerPort: 9300
        resources:
          limits:
            cpu: 1000m
            memory: 12Gi
          requests:
            cpu: 100m
            memory: 4Gi

        env:
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: cluster.initial_master_nodes
            value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2,"
          - name: discovery.seed_hosts
            value: "elasticsearch-master-headless"
          - name: cluster.name
            value: "elasticsearch"
          - name: network.host
            value: "0.0.0.0"
          - name: ES_JAVA_OPTS
            value: "-Xmx5g -Xms5g"
          - name: node.data
            value: "true"
          - name: node.ingest
            value: "true"
          - name: node.master
            value: "true"
        volumeMounts:
          - name: "elasticsearch-master"
            mountPath: /usr/share/elasticsearch/data

          - name: esconfig
            mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
            subPath: elasticsearch.yml

Describe the bug: Setting the 'keystore:' property in values.yaml causes the Elasticsearch deployment to fail.

Steps to reproduce:

  1. Set 'keystore:' in values.yaml like this:

keystore:
   - xpack.notification.slack.account.monitoring.url: 'https://webhook_url...'

  2. The deployment fails with no error message and the three pods never get deployed.
  3. An example is needed for adding an xpack.notification.slack.account setting through the keystore.

Expected behavior: The key/value format for 'keystore:' is not clear in the chart.

Provide logs and/or server output (if relevant): No error or log is returned at the time of 'helm install ...'.

Any additional context:

Crazybus commented 5 years ago
  1. The configuration you are using for the keystore values is not correct. If you take a look at https://github.com/elastic/helm-charts/blob/master/elasticsearch/README.md#how-to-use-the-keystore there are a few examples of the formatting.
  2. CHART: elasticsearch-7.3.0 does not yet contain the keystore feature. I'll be releasing a new version of the chart today or tomorrow (still catching up from my vacation)
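
For reference, a minimal sketch of the format that README section describes (the secret name here is illustrative): each key of a referenced Kubernetes secret becomes an entry in the Elasticsearch keystore.

kubectl create secret generic elastic-config-slack \
  --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/...'

values.yaml:

keystore:
  - secretName: elastic-config-slack
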
Crazybus commented 5 years ago

If you want to test it out before the release, you will need to deploy directly from the master branch.
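
A Helm 2 sketch of deploying from a master-branch checkout (paths and release name are illustrative):

git clone https://github.com/elastic/helm-charts.git
helm install --name elasticsearch ./helm-charts/elasticsearch --values values.yaml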

dcvtruong commented 5 years ago

@Crazybus Sure. Let me know when the new feature is ready.

Crazybus commented 5 years ago

Did you mean to reopen this issue again? In any case I'll ping you once the next release is live.

Crazybus commented 5 years ago

@Crazybus Sure. Let me know when the new feature is ready.

Latest release has just gone out (7.3.2) which contains this feature.
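
Picking up the new chart version with Helm 2 would look something like this (assuming the elastic Helm repo from the chart README is already added):

helm repo update
helm upgrade elasticsearch elastic/elasticsearch --version 7.3.2 --values values.yaml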

dcvtruong commented 5 years ago

@Crazybus Looks like the volumeMounts for the keystore are not working. The elasticsearch pods are stuck in CrashLoopBackOff status and never start. I can't seem to get any logs via kubectl, journalctl, or docker. Once I comment out the 'keystore:' setting, the pods work correctly.

$ kubectl get po -n elk -o wide
NAME                       READY   STATUS                  RESTARTS   AGE    IP             NODE                            NOMINATED NODE   READINESS GATES
elasticsearch-master-0     0/1     Init:CrashLoopBackOff   8          21m    10.233.100.5   .....-node-02   <none>           <none>
elasticsearch-master-1     0/1     Init:CrashLoopBackOff   8          21m    10.233.92.5    .....-node-01   <none>           <none>
elasticsearch-master-2     0/1     Init:CrashLoopBackOff   8          21m    10.233.80.4    .....-node-03   <none>           <none>
$ helm get elasticsearch
REVISION: 1
RELEASED: Mon Sep 23 13:43:17 2019
CHART: elasticsearch-7.3.2
USER-SUPPLIED VALUES:
{}

COMPUTED VALUES:
antiAffinity: hard
antiAffinityTopologyKey: kubernetes.io/hostname
clusterHealthCheckParams: wait_for_status=green&timeout=1s
clusterName: elasticsearch
esConfig:
  elasticsearch.yml: |
    xpack:
      security:
        enabled: false
esJavaOpts: -Xmx2g -Xms2g
esMajorVersion: ""
extraEnvs: []
extraInitContainers: ""
extraVolumeMounts: ""
extraVolumes: ""
fsGroup: ""
fullnameOverride: ""
httpPort: 9200
image: docker.elastic.co/elasticsearch/elasticsearch
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 7.3.2
ingress:
  annotations: {}
  enabled: false
  hosts:
  - chart-example.local
  path: /
  tls: []
initResources: {}
keystore:
- secretName: elastic-config-slack
labels: {}
lifecycle: {}
masterService: ""
masterTerminationFix: false
maxUnavailable: 1
minimumMasterNodes: 2
nameOverride: ""
networkHost: 0.0.0.0
nodeAffinity: {}
nodeGroup: master
nodeSelector: {}
persistence:
  annotations: {}
  enabled: true
podAnnotations: {}
podManagementPolicy: Parallel
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
podSecurityPolicy:
  create: false
  name: ""
  spec:
    fsGroup:
      rule: RunAsAny
    privileged: true
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - secret
    - configMap
    - persistentVolumeClaim
priorityClassName: ""
protocol: http
rbac:
  create: false
  serviceAccountName: ""
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
replicas: 3
resources:
  limits:
    cpu: 1000m
    memory: 5Gi
  requests:
    cpu: 100m
    memory: 2Gi
roles:
  data: "true"
  ingest: "true"
  master: "true"
schedulerName: ""
secretMounts: []
securityContext:
  capabilities:
    drop:
    - ALL
  runAsNonRoot: true
  runAsUser: 1000
service:
  annotations: {}
  httpPortName: http
  nodePort: 30998
  transportPortName: transport
  type: NodePort
sidecarResources: {}
sysctlInitContainer:
  enabled: true
sysctlVmMaxMapCount: 262144
terminationGracePeriod: 120
tolerations: []
transportPort: 9300
updateStrategy: RollingUpdate
volumeClaimTemplate:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: elkdops05sc

HOOKS:
---
# elasticsearch-jrwvt-test
apiVersion: v1
kind: Pod
metadata:
  name: "elasticsearch-jrwvt-test"
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
  - name: "elasticsearch-uwrcd-test"
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.3.2"
    command:
      - "sh"
      - "-c"
      - |
        #!/usr/bin/env bash -e
        curl -XGET --fail 'elasticsearch-master:9200/_cluster/health?wait_for_status=green&timeout=1s'
  restartPolicy: Never
MANIFEST:

---
# Source: elasticsearch/templates/poddisruptionbudget.yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: "elasticsearch-master-pdb"
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: "elasticsearch-master"
---
# Source: elasticsearch/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-master-config
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
data:
  elasticsearch.yml: |
    xpack:
      security:
        enabled: false
---
# Source: elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    {}

spec:
  type: NodePort
  selector:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  ports:
  - name: http
    protocol: TCP
    port: 9200
    nodePort: 30998
  - name: transport
    protocol: TCP
    port: 9300
---
# Source: elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master-headless
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
  # Create endpoints also if the related pod isn't ready
  publishNotReadyAddresses: true
  selector:
    app: "elasticsearch-master"
  ports:
  - name: http
    port: 9200
  - name: transport
    port: 9300
---
# Source: elasticsearch/templates/statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Tiller"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    esMajorVersion: "7"
spec:
  serviceName: elasticsearch-master-headless
  selector:
    matchLabels:
      app: "elasticsearch-master"
  replicas: 3
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-master
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: elkdops05sc

  template:
    metadata:
      name: "elasticsearch-master"
      labels:
        heritage: "Tiller"
        release: "elasticsearch"
        chart: "elasticsearch"
        app: "elasticsearch-master"
      annotations:

        configchecksum: 52f70be383267d2de2f36a6ba60c556c9542a826e0c5d1f8f886fdbb8230050
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000

      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - "elasticsearch-master"
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 120
      volumes:
        - name: esconfig
          configMap:
            name: elasticsearch-master-config
        - name: keystore
          emptyDir: {}
        - name: keystore-elastic-config-slack
          secret: 
            secretName: elastic-config-slack

      initContainers:
      - name: configure-sysctl
        securityContext:
          runAsUser: 0
          privileged: true
        image: "docker.elastic.co/elasticsearch/elasticsearch:7.3.2"
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        resources:
          {}

      - name: keystore
        image: "docker.elastic.co/elasticsearch/elasticsearch:7.3.2"
        command:
        - sh
        - -c
        - |
          #!/usr/bin/env bash
          set -euo pipefail

          elasticsearch-keystore create

          for i in /tmp/keystoreSecrets/*/*; do
            key=$(basename $i)
            echo "Adding file $i to keystore key $key"
            elasticsearch-keystore add-file "$key" "$i"
          done

          # Add the bootstrap password since otherwise the Elasticsearch entrypoint tries to do this on startup
          [ ! -z "$ELASTIC_PASSWORD" ] && echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x bootstrap.password

          cp -a /usr/share/elasticsearch/config/elasticsearch.keystore /tmp/keystore/
        env: 
          []

        resources: 
          {}

        volumeMounts:
          - name: keystore
            mountPath: /tmp/keystore
          - name: keystore-elastic-config-slack
            mountPath: /tmp/keystoreSecrets/elastic-config-slack

      containers:
      - name: "elasticsearch"
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          runAsUser: 1000

        image: "docker.elastic.co/elasticsearch/elasticsearch:7.3.2"
        imagePullPolicy: "IfNotPresent"
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 3
          timeoutSeconds: 5

          exec:
            command:
              - sh
              - -c
              - |
                #!/usr/bin/env bash -e
                # If the node is starting up wait for the cluster to be ready (request params: 'wait_for_status=green&timeout=1s' )
                # Once it has started only check that the node itself is responding
                START_FILE=/tmp/.es_start_file

                http () {
                    local path="${1}"
                    if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
                      BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
                    else
                      BASIC_AUTH=''
                    fi
                    curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path}
                }

                if [ -f "${START_FILE}" ]; then
                    echo 'Elasticsearch is already running, lets check the node is healthy'
                    http "/"
                else
                    echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
                    if http "/_cluster/health?wait_for_status=green&timeout=1s" ; then
                        touch ${START_FILE}
                        exit 0
                    else
                        echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
                        exit 1
                    fi
                fi
        ports:
        - name: http
          containerPort: 9200
        - name: transport
          containerPort: 9300
        resources:
          limits:
            cpu: 1000m
            memory: 5Gi
          requests:
            cpu: 100m
            memory: 2Gi

        env:
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: cluster.initial_master_nodes
            value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2,"
          - name: discovery.seed_hosts
            value: "elasticsearch-master-headless"
          - name: cluster.name
            value: "elasticsearch"
          - name: network.host
            value: "0.0.0.0"
          - name: ES_JAVA_OPTS
            value: "-Xmx2g -Xms2g"
          - name: node.data
            value: "true"
          - name: node.ingest
            value: "true"
          - name: node.master
            value: "true"
        volumeMounts:
          - name: "elasticsearch-master"
            mountPath: /usr/share/elasticsearch/data

          - name: keystore
            mountPath: /usr/share/elasticsearch/config/elasticsearch.keystore
            subPath: elasticsearch.keystore

          - name: esconfig
            mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
            subPath: elasticsearch.yml
Crazybus commented 5 years ago

@dcvtruong Which command did you use to create the secret? And can you also make sure you are looking at the logs from the initContainer and not the main container? The command should be something like kubectl logs elasticsearch-master-0 -c init. If the secret hasn't been created correctly then it is expected that the init container will fail with an error.

Could you also try running the example from here: https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/config just to be sure it's not something funky with your secret? Running make from this directory will be enough to deploy and test it.

dcvtruong commented 5 years ago

@Crazybus I used the following command to create the secret:

kubectl create secret generic elastic-config-slack --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/..../..../....'

values.yaml:

keystore: 
   - secretName: elastic-config-slack

I only see one container (elasticsearch) and could not find an initContainer to retrieve the log from.

BTW, running the example from the config folder failed with an exec permission error, so I cannot get in to verify the keystore list.

$ make
kubectl delete secret elastic-config-credentials elastic-config-secret elastic-config-slack elastic-config-custom-path || true
Error from server (NotFound): secrets "elastic-config-credentials" not found
Error from server (NotFound): secrets "elastic-config-secret" not found
Error from server (NotFound): secrets "elastic-config-slack" not found
Error from server (NotFound): secrets "elastic-config-custom-path" not found
kubectl create secret generic elastic-credentials --from-literal=password=changeme --from-literal=username=elastic
secret/elastic-credentials created
kubectl create secret generic elastic-config-slack --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
secret/elastic-config-slack created
kubectl create secret generic elastic-config-secret --from-file=xpack.watcher.encryption_key=./watcher_encryption_key
secret/elastic-config-secret created
kubectl create secret generic elastic-config-custom-path --from-literal=slack_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd' --from-literal=thing_i_don_tcare_about=test
secret/elastic-config-custom-path created
helm upgrade --wait --timeout=600 --install helm-es-config --values ./values.yaml ../../
Release "helm-es-config" does not exist. Installing it now.
E0923 18:42:44.207005  105061 portforward.go:316] error copying from local connection to remote stream: read tcp4 127.0.0.1:45588->127.0.0.1:49518: read: connection reset by peer
NAME:   helm-es-config
LAST DEPLOYED: Mon Sep 23 18:40:51 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/PodDisruptionBudget
NAME               MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
config-master-pdb  N/A            1                1                    112s

==> v1/ConfigMap
NAME                  DATA  AGE
config-master-config  1     112s

==> v1/Service
NAME                    TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)                        AGE
config-master           NodePort   10.233.12.228  <none>       9200:30998/TCP,9300:31813/TCP  112s
config-master-headless  ClusterIP  None           <none>       9200/TCP,9300/TCP              112s

==> v1beta1/StatefulSet
NAME           DESIRED  CURRENT  AGE
config-master  1        1        112s

==> v1/Pod(related)
NAME             READY  STATUS   RESTARTS  AGE
config-master-0  1/1    Running  0         112s

NOTES:
1. Watch all cluster members come up.
  $ kubectl get pods --namespace=default -l app=config-master -w
2. Test cluster health using Helm test.
  $ helm test helm-es-config

GOSS_CONTAINER=$(kubectl get --no-headers=true pods -l release=helm-es-config -o custom-columns=:metadata.name | sed -n 1p ) && \
echo Testing with pod: $GOSS_CONTAINER && \
kubectl cp test/goss.yaml $GOSS_CONTAINER:/tmp/goss.yaml && \
kubectl exec $GOSS_CONTAINER -- sh -c "cd /tmp/ && curl -s -L https://github.com/aelsabbahy/goss/releases/download/v0.3.6/goss-linux-amd64 -o goss && chmod +rx ./goss && ./goss --gossfile goss.yaml validate --retry-timeout 300s --sleep 5s --color --format documentation"
Testing with pod: config-master-0
Error from server (Forbidden): pods "config-master-0" is forbidden: cannot exec into or attach to a privileged container
make: *** [goss] Error 1
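
One way to find the init container names is to read them off the pod spec; in the StatefulSet above they are configure-sysctl and keystore. A sketch, with namespace and pod name taken from the thread:

kubectl get pod elasticsearch-master-0 -n elk \
  -o jsonpath='{.spec.initContainers[*].name}'
kubectl logs elasticsearch-master-0 -n elk -c keystore
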
ravishivt commented 5 years ago

I had the same problem. There's a bug in the keystore initContainer that makes it fail whenever custom keystore settings are defined. A proposed fix is in #301.
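
The failing step is the bootstrap-password line in the keystore initContainer shown in the manifest above. A minimal sketch of the failure mode; the guarded rewrite is one possible fix, not necessarily the exact change in #301:

# Under `set -euo pipefail`, expanding ELASTIC_PASSWORD while it is
# unset (security disabled, env: []) aborts the script before the
# keystore is copied into place:
[ ! -z "$ELASTIC_PASSWORD" ] && echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x bootstrap.password

# A guard that tolerates an unset ELASTIC_PASSWORD:
if [ -n "${ELASTIC_PASSWORD:-}" ]; then
  echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x bootstrap.password
fi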

dcvtruong commented 5 years ago

@ravishivt How were you able to debug the keystore error? I was not able to see any error from the pod.

Crazybus commented 5 years ago

I had the same problem. There's a bug in the keystore initContainer that makes it fail whenever custom keystore settings are defined. A proposed fix is in #301.

Just to clarify on this one (and to explain why the automated integration test still passes): this bug only affects anybody that doesn't have security enabled (i.e. when ELASTIC_PASSWORD is not set, which it is in the automated test).

@ravishivt How were you able to debug the keystore error? I was not able to see any error from the pod.

Do you get an error when trying to get the logs? I suggested running the command like this in my previous reply.

kubectl logs elasticsearch-master-0 -c init
dcvtruong commented 5 years ago

@Crazybus Not able to get logs from the initContainer:

NAMESPACE     NAME                                                      READY   STATUS     RESTARTS   AGE     IP              NODE                              NOMINATED NODE   READINESS GATES
elk           elasticsearch-master-0                                    0/1     Init:1/2   1          4m13s   10.233.80.7     .....-node-03     <none>           <none>
elk           elasticsearch-master-1                                    0/1     Init:1/2   1          4m13s   10.233.100.24   .....-node-02     <none>           <none>
elk           elasticsearch-master-2                                    0/1     Init:1/2   1          4m13s   10.233.92.22    .....-node-01     <none>           <none>
$ kubectl logs elasticsearch-master-0 -n elk -c init
Error from server (BadRequest): container init is not valid for pod elasticsearch-master-0
ravishivt commented 5 years ago

Just to clarify on this one (and to explain why the automated integration test still passes): this bug only affects anybody that doesn't have security enabled (i.e. when ELASTIC_PASSWORD is not set, which it is in the automated test).

Ack, makes sense!

Not able to get logs from the initContainer

Try keystore instead of init for the container name. See if you get the same log errors as I did in #301.

kubectl logs elasticsearch-master-0 -c keystore

botelastic[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

botelastic[bot] commented 4 years ago

This issue has been automatically closed because it has not had recent activity since being marked as stale.