hashicorp / vault-helm

Helm chart to install Vault and other associated components.
Mozilla Public License 2.0

Pending status #449

Open itsecforu opened 3 years ago

itsecforu commented 3 years ago

Hey! How do I correctly set up a PV for the vault-0 pod in HA mode?

I get:


```
NAME                                    READY   STATUS    RESTARTS   AGE
vault-0                                 0/1     Pending   0          2m28s
vault-1                                 0/1     Running   0          7m47s
vault-2                                 0/1     Running   0          6m32s
vault-agent-injector-544f8c5f4f-x5xxd   1/1     Running   0          10m
```

My values:

```
# Available parameters and their default values for the Vault chart.

global:
  # enabled is the master enabled switch. Setting this to true or false
  # will enable or disable all the components within this chart by default.
  enabled: true
  # Image pull secret to use for registry authentication.
  imagePullSecrets: []
  # imagePullSecrets:
  #   - name: image-pull-secret
  # TLS for end-to-end encrypted transport
  tlsDisable: true
  # If deploying to OpenShift
  openshift: false
  # Create PodSecurityPolicy for pods
  psp:
    enable: false
    # Annotation for PodSecurityPolicy.
    # This is a multi-line templated string map, and can also be set as YAML.
    annotations: |
      seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default,runtime/default
      apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
      seccomp.security.alpha.kubernetes.io/defaultProfileName:  runtime/default
      apparmor.security.beta.kubernetes.io/defaultProfileName:  runtime/default

injector:
  # True if you want to enable vault agent injection.
  enabled: true

  # If true, will enable a node exporter metrics endpoint at /metrics.
  metrics:
    enabled: false

  # External vault server address for the injector to use. Setting this will
  # disable deployment of a vault server along with the injector.
  externalVaultAddr: ""

  # image sets the repo and tag of the vault-k8s image to use for the injector.
  image:
    repository: "hashicorp/vault-k8s"
    tag: "0.6.0"
    pullPolicy: IfNotPresent

  # agentImage sets the repo and tag of the Vault image to use for the Vault Agent
  # containers.  This should be set to the official Vault image.  Vault 1.3.1+ is
  # required.
  agentImage:
    repository: "vault"
    tag: "1.5.4"

  # Mount Path of the Vault Kubernetes Auth Method.
  authPath: "auth/kubernetes"

  # Configures the log verbosity of the injector. Supported log levels: Trace, Debug, Error, Warn, Info
  logLevel: "info"

  # Configures the log format of the injector. Supported log formats: "standard", "json".
  logFormat: "standard"

  # Configures all Vault Agent sidecars to revoke their token when shutting down
  revokeOnShutdown: false

  # namespaceSelector is the selector for restricting the webhook to only
  # specific namespaces.
  # See https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-namespaceselector
  # for more details.
  # Example:
  # namespaceSelector:
  #    matchLabels:
  #      sidecar-injector: enabled
  namespaceSelector: {}

  # Configures failurePolicy of the webhook. By default webhook failures are ignored.
  # To block pod creation while webhook is unavailable, set the policy to `Fail` below.
  # See https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#failure-policy
  #
  # failurePolicy: Fail

  certs:
    # secretName is the name of the secret that has the TLS certificate and
    # private key to serve the injector webhook. If this is null, then the
    # injector will default to its automatic management mode that will assign
    # a service account to the injector to generate its own certificates.
    secretName: null

    # caBundle is a base64-encoded PEM-encoded certificate bundle for the
    # CA that signed the TLS certificate that the webhook serves. This must
    # be set if secretName is non-null.
    caBundle: ""

    # certName and keyName are the names of the files within the secret for
    # the TLS cert and private key, respectively. These have reasonable
    # defaults but can be customized if necessary.
    certName: tls.crt
    keyName: tls.key

  resources: {}
  # resources:
  #   requests:
  #     memory: 256Mi
  #     cpu: 250m
  #   limits:
  #     memory: 256Mi
  #     cpu: 250m

  # extraEnvironmentVars is a list of extra environment variables to set in the
  # injector deployment.
  extraEnvironmentVars: {}
    # KUBERNETES_SERVICE_HOST: kubernetes.default.svc

  # Affinity Settings for injector pods
  # This should be a multi-line string matching the affinity section of a
  # PodSpec.
  affinity: null

  # Toleration Settings for injector pods
  # This should be a multi-line string matching the Toleration array
  # in a PodSpec.
  tolerations: null

  # nodeSelector labels for injector pod assignment, formatted as a multi-line string.
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # Example:
  # nodeSelector: |
  #   beta.kubernetes.io/arch: amd64
  nodeSelector: null

  # Priority class for injector pods
  priorityClassName: ""

  # Extra annotations to attach to the injector pods
  # This can either be YAML or a YAML-formatted multi-line templated string map
  # of the annotations to apply to the injector pods
  annotations: {}

server:
  # Resource requests, limits, etc. for the server cluster placement. This
  # should map directly to the value of the resources field for a PodSpec.
  # By default no direct resource request is made.

  image:
    repository: "vault"
    tag: "1.5.4"
    # Overrides the default Image Pull Policy
    pullPolicy: IfNotPresent

  # Configure the Update Strategy Type for the StatefulSet
  # See https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
  updateStrategyType: "OnDelete"

  resources: {}
  # resources:
  #   requests:
  #     memory: 256Mi
  #     cpu: 250m
  #   limits:
  #     memory: 256Mi
  #     cpu: 250m

  # Ingress allows ingress services to be created to allow external access
  # from Kubernetes to access Vault pods.
  # If deployment is on OpenShift, the following block is ignored.
  # In order to expose the service, use the route section below
  ingress:
    enabled: false
    labels: {}
      # traffic: external
    annotations: {}
      # |
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
      #   or
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    hosts:
      - host: chart-example.local
        paths: []

    tls: []
    #  - secretName: chart-example-tls
    #    hosts:
    #      - chart-example.local

  # OpenShift only - create a route to expose the service
  # The created route will be of type passthrough
  route:
    enabled: false
    labels: {}
    annotations: {}
    host: chart-example.local

  # authDelegator enables a cluster role binding to be attached to the service
  # account.  This cluster role binding can be used to set up the Kubernetes auth
  # method.  https://www.vaultproject.io/docs/auth/kubernetes.html
  authDelegator:
    enabled: true

  # extraInitContainers is a list of init containers. Specified as a YAML list.
  # This is useful if you need to run a script to provision TLS certificates or
  # write out configuration files in a dynamic way.
  extraInitContainers: null
    # # This example installs a plugin pulled from github into the /usr/local/libexec/vault/oauthapp folder,
    # # which is defined in the volumes value.
    # - name: oauthapp
    #   image: "alpine"
    #   command: [sh, -c]
    #   args:
    #     - cd /tmp &&
    #       wget https://github.com/puppetlabs/vault-plugin-secrets-oauthapp/releases/download/v1.2.0/vault-plugin-secrets-oauthapp-v1.2.0-linux-amd64.tar.xz -O oauthapp.xz &&
    #       tar -xf oauthapp.xz &&
    #       mv vault-plugin-secrets-oauthapp-v1.2.0-linux-amd64 /usr/local/libexec/vault/oauthapp &&
    #       chmod +x /usr/local/libexec/vault/oauthapp
    #   volumeMounts:
    #     - name: plugins
    #       mountPath: /usr/local/libexec/vault

  # extraContainers is a list of sidecar containers. Specified as a YAML list.
  extraContainers: null

  # shareProcessNamespace enables process namespace sharing between Vault and the extraContainers
  # This is useful if Vault must be signaled, e.g. to send a SIGHUP for log rotation
  shareProcessNamespace: false

  # extraArgs is a string containing additional Vault server arguments.
  extraArgs: ""

  # Used to define custom readinessProbe settings
  readinessProbe:
    enabled: true
    # If you need to use a http path instead of the default exec
    # path: /v1/sys/health?standbyok=true

    # When a probe fails, Kubernetes will try failureThreshold times before giving up
    failureThreshold: 2
    # Number of seconds after the container has started before probe initiates
    initialDelaySeconds: 5
    # How often (in seconds) to perform the probe
    periodSeconds: 5
    # Minimum consecutive successes for the probe to be considered successful after having failed
    successThreshold: 1
    # Number of seconds after which the probe times out.
    timeoutSeconds: 3
  # Used to enable a livenessProbe for the pods
  livenessProbe:
    enabled: false
    path: "/v1/sys/health?standbyok=true"
    # When a probe fails, Kubernetes will try failureThreshold times before giving up
    failureThreshold: 2
    # Number of seconds after the container has started before probe initiates
    initialDelaySeconds: 60
    # How often (in seconds) to perform the probe
    periodSeconds: 5
    # Minimum consecutive successes for the probe to be considered successful after having failed
    successThreshold: 1
    # Number of seconds after which the probe times out.
    timeoutSeconds: 3

  # Used to set the sleep time during the preStop step
  preStopSleepSeconds: 5

  # Used to define commands to run after the pod is ready.
  # This can be used to automate processes such as initialization
  # or bootstrapping auth methods.
  postStart: []
  # - /bin/sh
  # - -c
  # - /vault/userconfig/myscript/run.sh

  # extraEnvironmentVars is a list of extra environment variables to set with the stateful set. These could be
  # used to include variables required for auto-unseal.
  extraEnvironmentVars: {}
    # GOOGLE_REGION: global
    # GOOGLE_PROJECT: myproject
    # GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/myproject/myproject-creds.json

  # extraSecretEnvironmentVars is a list of extra environment variables to set with the stateful set.
  # These variables take value from existing Secret objects.
  extraSecretEnvironmentVars: []
    # - envName: AWS_SECRET_ACCESS_KEY
    #   secretName: vault
    #   secretKey: AWS_SECRET_ACCESS_KEY

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`. The value below is
  # an array of objects, examples are shown below.
  extraVolumes: []
    # - type: secret (or "configMap")
    #   name: my-secret
    #   path: null # default is `/vault/userconfig`

  # volumes is a list of volumes made available to all containers. These are rendered
  # via toYaml rather than pre-processed like the extraVolumes value.
  # The purpose is to make it easy to share volumes between containers.
  volumes: null
  #   - name: plugins
  #     emptyDir: {}

  # volumeMounts is a list of volumeMounts for the main server container. These are rendered
  # via toYaml rather than pre-processed like the extraVolumes value.
  # The purpose is to make it easy to share volumes between containers.
  volumeMounts: null
  #   - mountPath: /usr/local/libexec/vault
  #     name: plugins
  #     readOnly: true

  # Affinity Settings
  # Commenting out the affinity variable or setting it to empty allows
  # deployment to single-node clusters such as Minikube.
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: {{ template "vault.name" . }}
              app.kubernetes.io/instance: "{{ .Release.Name }}"
              component: server
          topologyKey: kubernetes.io/hostname

  # Toleration Settings for server pods
  # This should be a multi-line string matching the Toleration array
  # in a PodSpec.
  tolerations: null

  # nodeSelector labels for server pod assignment, formatted as a multi-line string.
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # Example:
  # nodeSelector: |
  #   beta.kubernetes.io/arch: amd64
  nodeSelector: null

  # Enables network policy for server pods
  networkPolicy:
    enabled: false

  # Priority class for server pods
  priorityClassName: ""

  # Extra labels to attach to the server pods
  # This should be a YAML map of the labels to apply to the server pods
  extraLabels: {}

  # Extra annotations to attach to the server pods
  # This can either be YAML or a YAML-formatted multi-line templated string map
  # of the annotations to apply to the server pods
  annotations: {}

  # Enables a headless service to be used by the Vault Statefulset
  service:
    enabled: true
    # clusterIP controls whether a Cluster IP address is attached to the
    # Vault service within Kubernetes.  By default the Vault service will
    # be given a Cluster IP address, set to None to disable.  When disabled
    # Kubernetes will create a "headless" service.  Headless services can be
    # used to communicate with pods directly through DNS instead of a round robin
    # load balancer.
    # clusterIP: None

    # Configures the service type for the main Vault service.  Can be ClusterIP
    # or NodePort.
    #type: ClusterIP

    # If type is set to "NodePort", a specific nodePort value can be configured,
    # will be random if left blank.
    #nodePort: 30000

    # Port on which Vault server is listening
    port: 8200
    # Target port to which the service should be mapped to
    targetPort: 8200
    # Extra annotations for the service definition. This can either be YAML or a
    # YAML-formatted multi-line templated string map of the annotations to apply
    # to the service.
    annotations: {}

  # This configures the Vault Statefulset to create a PVC for data
  # storage when using the file or raft backend storage engines.
  # See https://www.vaultproject.io/docs/configuration/storage/index.html to know more
  dataStorage:
    enabled: true
    # Size of the PVC created
    size: 10Gi
    # Location where the PVC will be mounted.
    mountPath: "/vault/data"
    # Name of the storage class to use.  If null it will use the
    # configured default Storage Class.
    storageClass: local-storage
    # Access Mode of the storage device being used for the PVC
    accessMode: ReadWriteOnce
    # Annotations to apply to the PVC
    annotations: {}

  # This configures the Vault Statefulset to create a PVC for audit
  # logs.  Once Vault is deployed, initialized, and unsealed, Vault must
  # be configured to use this for audit logs.  This will be mounted to
  # /vault/audit
  # See https://www.vaultproject.io/docs/audit/index.html to know more
  auditStorage:
    enabled: false
    # Size of the PVC created
    size: 10Gi
    # Location where the PVC will be mounted.
    mountPath: "/vault/audit"
    # Name of the storage class to use.  If null it will use the
    # configured default Storage Class.
    storageClass: local-storage
    # Access Mode of the storage device being used for the PVC
    accessMode: ReadWriteOnce
    # Annotations to apply to the PVC
    annotations: {}

  # Run Vault in "dev" mode. This requires no further setup, no state management,
  # and no initialization. This is useful for experimenting with Vault without
  # needing to unseal, store keys, etc. All data is lost on restart - do not
  # use dev mode for anything other than experimenting.
  # See https://www.vaultproject.io/docs/concepts/dev-server.html to know more
  dev:
    enabled: false

  # Run Vault in "standalone" mode. This is the default mode that will deploy if
  # no arguments are given to helm. This requires a PVC for data storage to use
  # the "file" backend.  This mode is not highly available and should not be scaled
  # past a single replica.
  standalone:
    enabled: "-"

    # config is a raw string of default configuration when using a Stateful
    # deployment. Default is to use a PersistentVolumeClaim mounted at /vault/data
    # and store data there. This is only used when using a Replica count of 1, and
    # using a stateful set. This should be HCL.

    # Note: Configuration files are stored in ConfigMaps so sensitive data
    # such as passwords should be either mounted through extraSecretEnvironmentVars
    # or through a Kube secret.  For more information see:
    # https://www.vaultproject.io/docs/platform/k8s/helm/run#protecting-sensitive-vault-configurations
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "file" {
        path = "/vault/data"
      }

      # Example configuration for using auto-unseal, using Google Cloud KMS. The
      # GKMS keys must already exist, and the cluster must have a service account
      # that is authorized to access GCP KMS.
      #seal "gcpckms" {
      #   project     = "vault-helm-dev"
      #   region      = "global"
      #   key_ring    = "vault-helm-unseal-kr"
      #   crypto_key  = "vault-helm-unseal-key"
      #}

  # Run Vault in "HA" mode. There are no storage requirements unless audit log
  # persistence is required.  In HA mode Vault will configure itself to use Consul
  # for its storage backend.  The default configuration provided will work with
  # the Consul Helm project by default.  It is possible to manually configure
  # Vault to use a different HA backend.
  ha:
    enabled: true
    replicas: 3

    # Set the api_addr configuration for Vault HA
    # See https://www.vaultproject.io/docs/configuration#api_addr
    # If set to null, this will be set to the Pod IP Address
    apiAddr: null

    # Enables Vault's integrated Raft storage.  Unlike the typical HA modes where
    # Vault's persistence is external (such as Consul), enabling Raft mode will create
    # persistent volumes for Vault to store data according to the configuration under server.dataStorage.
    # The Vault cluster will coordinate leader elections and failovers internally.
    raft:

      # Enables Raft integrated storage
      enabled: true
      # Set the Node Raft ID to the name of the pod
      setNodeId: false

      # Note: Configuration files are stored in ConfigMaps so sensitive data
      # such as passwords should be either mounted through extraSecretEnvironmentVars
      # or through a Kube secret.  For more information see:
      # https://www.vaultproject.io/docs/platform/k8s/helm/run#protecting-sensitive-vault-configurations
      config: |
        ui = true

        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }

        storage "raft" {
          path = "/vault/data"
        }

        service_registration "kubernetes" {}

    # config is a raw string of default configuration when using a Stateful
    # deployment. Default is to use Consul for its HA storage backend.
    # This should be HCL.

    # Note: Configuration files are stored in ConfigMaps so sensitive data
    # such as passwords should be either mounted through extraSecretEnvironmentVars
    # or through a Kube secret.  For more information see:
    # https://www.vaultproject.io/docs/platform/k8s/helm/run#protecting-sensitive-vault-configurations
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "consul" {
        path = "vault"
        address = "HOST_IP:8500"
      }

      service_registration "kubernetes" {}

      # Example configuration for using auto-unseal, using Google Cloud KMS. The
      # GKMS keys must already exist, and the cluster must have a service account
      # that is authorized to access GCP KMS.
      #seal "gcpckms" {
      #   project     = "vault-helm-dev-246514"
      #   region      = "global"
      #   key_ring    = "vault-helm-unseal-kr"
      #   crypto_key  = "vault-helm-unseal-key"
      #}

    # A disruption budget limits the number of pods of a replicated application
    # that are down simultaneously from voluntary disruptions
    disruptionBudget:
      enabled: true

      # maxUnavailable will default to (n/2)-1 where n is the number of
      # replicas. If you'd like a custom value, you can specify an override here.
      maxUnavailable: null

  # Definition of the serviceAccount used to run Vault.
  # These options are also used when using an external Vault server to validate
  # Kubernetes tokens.
  serviceAccount:
    # Specifies whether a service account should be created
    create: true
    # The name of the service account to use.
    # If not set and create is true, a name is generated using the fullname template
    name: ""
    # Extra annotations for the serviceAccount definition. This can either be
    # YAML or a YAML-formatted multi-line templated string map of the
    # annotations to apply to the serviceAccount.
    annotations: {}

  # Settings for the statefulSet used to run Vault.
  statefulSet:
    # Extra annotations for the statefulSet. This can either be YAML or a
    # YAML-formatted multi-line templated string map of the annotations to apply
    # to the statefulSet.
    annotations: {}

# Vault UI
ui:
  # True if you want to create a Service entry for the Vault UI.
  #
  # serviceType can be used to control the type of service created. For
  # example, setting this to "LoadBalancer" will create an external load
  # balancer (for supported K8S installations) to access the UI.
  enabled: true
  publishNotReadyAddresses: true
  # The service should only contain selectors for active Vault pod
  activeVaultPodOnly: false
  serviceType: "ClusterIP"
  serviceNodePort: null
  externalPort: 8200

  # loadBalancerSourceRanges:
  #   - 10.0.0.0/16
  #   - 1.78.23.3/32

  # loadBalancerIP:

  # Extra annotations to attach to the ui service
  # This can either be YAML or a YAML-formatted multi-line templated string map
  # of the annotations to apply to the ui service
  annotations: {}
```

I noticed that if I set **dev** mode to **true**, it works, but then there is no cluster and I can't get the unseal keys.

Please help me fix it.
itsecforu commented 3 years ago

Nobody? :-(

tvoran commented 3 years ago

Hi @itsecforu, I'd suggest running kubectl describe on the vault-0 pod and on the data-vault-0 PVC to see if the Events say why it's still in a Pending state.
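For example (the outputs later in this thread show the chart running in the vault namespace):

```
kubectl describe pod vault-0 -n vault
kubectl describe pvc data-vault-0 -n vault
```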

Keep in mind that uninstalling the helm chart doesn't delete PVCs, so you may want to delete those manually to start fresh:

```
kubectl delete pvc data-vault-0 data-vault-1 data-vault-2
```

I'd also encourage you to bring this type of question to our discuss forum, since it'll be seen by a wider audience there.

itsecforu commented 3 years ago
```
kubectl describe pod vault-0 -n vault
Name:           vault-0
Namespace:      vault
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/instance=vault
                app.kubernetes.io/name=vault
                component=server
                controller-revision-hash=vault-86d855d95d
                helm.sh/chart=vault-0.8.0
                statefulset.kubernetes.io/pod-name=vault-0
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/vault
Containers:
  vault:
    Image:       vault:1.5.4
    Ports:       8200/TCP, 8201/TCP, 8202/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      /bin/sh
      -ec
    Args:
      cp /vault/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;
      [ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /tmp/storageconfi                                                                              g.hcl;
      [ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /tmp/storageconfig.h                                                                              cl;
      [ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /tmp/storageco                                                                              nfig.hcl;
      [ -n "${API_ADDR}" ] && sed -Ei "s|API_ADDR|${API_ADDR?}|g" /tmp/storageco                                                                              nfig.hcl;
      [ -n "${TRANSIT_ADDR}" ] && sed -Ei "s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g" /t                                                                              mp/storageconfig.hcl;
      [ -n "${RAFT_ADDR}" ] && sed -Ei "s|RAFT_ADDR|${RAFT_ADDR?}|g" /tmp/storag                                                                              econfig.hcl;
      /usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfi                                                                              g.hcl

    Readiness:  exec [/bin/sh -ec vault status -tls-skip-verify] delay=5s timeou                                                                              t=3s period=5s #success=1 #failure=2
    Environment:
      HOST_IP:               (v1:status.hostIP)
      POD_IP:                (v1:status.podIP)
      VAULT_K8S_POD_NAME:   vault-0 (v1:metadata.name)
      VAULT_K8S_NAMESPACE:  vault (v1:metadata.namespace)
      VAULT_ADDR:           http://127.0.0.1:8200
      VAULT_API_ADDR:       http://$(POD_IP):8200
      SKIP_CHOWN:           true
      SKIP_SETCAP:          true
      HOSTNAME:             vault-0 (v1:metadata.name)
      VAULT_CLUSTER_ADDR:   https://$(HOSTNAME).vault-internal:8201
      HOME:                 /home/vault
    Mounts:
      /home/vault from home (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from vault-token-99g68 (ro)
      /vault/config from config (rw)
      /vault/data from data (rw)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-vault-0
    ReadOnly:   false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      vault-config
    Optional:  false
  home:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  vault-token-99g68:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  vault-token-99g68
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  6d    default-scheduler  0/7 nodes are available: 7 pod has unbound immediate PersistentVolumeClaims.
```
itsecforu commented 3 years ago
```
kubectl get pvc -n vault
NAME           STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS    AGE
data-vault-0   Pending                                                       60d
data-vault-1   Bound     vault     10Gi       RWO            local-storage   6d
data-vault-2   Bound     consul1   20Gi       RWO            local-storage   6d
```
itsecforu commented 3 years ago

After deleting data-vault-0, the pod is still pending. What do I need to set up in the values?

itsecforu commented 3 years ago

Guys, please help me install this cluster correctly.

FrederikNJS commented 3 years ago
```
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  6d    default-scheduler  0/7 nodes are available: 7 pod has unbound immediate PersistentVolumeClaims
```

This tells you that the PersistentVolumeClaim cannot be bound to a PersistentVolume. This doesn't have anything to do with Vault itself; it's about your cluster's setup for PersistentVolumes.
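The values in this thread set storageClass: local-storage, and a local-storage class is normally defined with provisioner: kubernetes.io/no-provisioner, so PersistentVolumes are not created dynamically: a free PV with a matching storage class, size, and access mode must already exist for each claim (data-vault-1 and data-vault-2 found one, data-vault-0 didn't). Here is a minimal sketch of a PersistentVolume that could satisfy data-vault-0; the volume name, node name, and host path are placeholders for your environment:

```
# Sketch only: adjust the name, path, and node to your cluster.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vault-data-0
spec:
  capacity:
    storage: 10Gi                  # must cover the 10Gi requested via server.dataStorage.size
  accessModes:
    - ReadWriteOnce                # matches server.dataStorage.accessMode
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage  # matches server.dataStorage.storageClass
  local:
    path: /mnt/vault-data          # placeholder directory that must already exist on the node
  nodeAffinity:                    # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1         # placeholder node name
```

Once such a PV exists (check with kubectl get pv), the Pending claim should bind and the scheduler should place vault-0 on the matching node.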