hashicorp / vault-helm

Helm chart to install Vault and other associated components.

Vault refusing to connect to consul while security features are enabled #334

Open · iamaverrick opened this issue 4 years ago

iamaverrick commented 4 years ago

Thank you for finally making an official HashiCorp Vault Helm chart. We have been using Vault for some time and can now deploy it on our k8s cluster using this chart, which brings me to a couple of issues we hit when deploying the service.

We are using Consul as Vault's storage backend via the Helm chart at https://github.com/hashicorp/consul-helm. We run it in a production environment and needed to enable gossipEncryption, tls, and acls to secure Consul. After we deployed with those features on, Vault refused the connection, and I'm confused about how to provide the necessary environment variables so the two services can work together.

Consul configuration (values.yaml):

global:
  name: consul
  datacenter: dc1
  enablePodSecurityPolicies: true

  # gossipEncryption configures which Kubernetes secret to retrieve Consul's
  # gossip encryption key from (see https://www.consul.io/docs/agent/options.html#_encrypt).
  # If secretName or secretKey are not set, gossip encryption will not be enabled.
  # The secret must be in the same namespace that Consul is installed into.
  #
  # The secret can be created by running:
  #    kubectl create secret generic consul-gossip-encryption-key \
  #      --from-literal=key=$(consul keygen).
  #
  # In this case, secretName would be "consul-gossip-encryption-key" and
  # secretKey would be "key".
  gossipEncryption:
    # secretName is the name of the Kubernetes secret that holds the gossip
    # encryption key. The secret must be in the same namespace that Consul is installed into.
    secretName: "consul-gossip-encryption-key"
    # secretKey is the key within the Kubernetes secret that holds the gossip
    # encryption key.
    secretKey: "key"

  # Enables TLS encryption across the cluster to verify authenticity of the
  # servers and clients that connect. Note: It is HIGHLY recommended that you also
  # enable Gossip encryption.
  # See https://learn.hashicorp.com/consul/security-networking/agent-encryption
  #
  # Note: this relies on functionality introduced with Consul 1.4.1. Make sure
  # your global.image value is at least version 1.4.1.
  tls:
    enabled: true

    # enableAutoEncrypt turns on the auto-encrypt feature on
    # clients and servers.
    # It also switches consul-k8s components to retrieve the CA
    # from the servers via the API.
    # Requires Consul 1.7.1+ and consul-k8s 0.13.0
    enableAutoEncrypt: true

  # Configure ACLs.
  acls:

    # If true, the Helm chart will automatically manage ACL tokens and policies
    # for all Consul and consul-k8s components. This requires Consul >= 1.4 and consul-k8s >= 0.14.0.
    manageSystemACLs: true

    # bootstrapToken references a Kubernetes secret containing the bootstrap token to use
    # for creating policies and tokens for all Consul and consul-k8s components.
    # If set, we will skip ACL bootstrapping of the servers and will only initialize
    # ACLs for the Consul and consul-k8s system components.
    # Requires consul-k8s >= 0.14.0
    bootstrapToken:
      secretName: null
      secretKey: null

    # If true, an ACL token will be created that can be used in secondary
    # datacenters for replication. This should only be set to true in the
    # primary datacenter since the replication token must be created from that
    # datacenter.
    # In secondary datacenters, the secret needs to be imported from the primary
    # datacenter and referenced via global.acls.replicationToken.
    # Requires consul-k8s >= 0.13.0
    createReplicationToken: true

    # replicationToken references a secret containing the replication ACL token.
    # This token will be used by secondary datacenters to perform ACL replication
    # and create ACL tokens and policies.
    # This value is ignored if bootstrapToken is also set.
    # Requires consul-k8s >= 0.13.0
    replicationToken:
      secretName: null
      secretKey: null

  # Settings related to federating with another Consul datacenter.
  federation:
    # If enabled, this datacenter will be federation-capable. Only federation
    # through mesh gateways is supported.
    # Mesh gateways and servers will be configured to allow federation.
    # Requires global.tls.enabled, meshGateway.enabled and connectInject.enabled
    # to be true.
    # Requires Consul 1.8+.
    enabled: false

    # If true, the chart will create a Kubernetes secret that can be imported
    # into secondary datacenters so they can federate with this datacenter. The
    # secret contains all the information secondary datacenters need to contact
    # and authenticate with this datacenter. This should only be set to true
    # in your primary datacenter. The secret name is
    # <global.name>-federation (if setting global.name), otherwise
    # <helm-release-name>-consul-federation.
    # Requires consul-k8s 0.15.0+.
    createFederationSecret: false

# Server, when enabled, configures a server cluster to run. This should
# be disabled if you plan on connecting to a Consul cluster external to
# the Kube cluster.
server:
  replicas: 1
  bootstrapExpect: 1 # Should be <= the replica count

  # storage and storageClass are the settings for configuring stateful
  # storage for the server pods. storage should be set to the disk size of
  # the attached volume. storageClass is the class of storage which defaults
  # to null (the Kube cluster will pick the default).
  storage: 10Gi
  storageClass: null

  # disruptionBudget enables the creation of a PodDisruptionBudget to
  # prevent voluntary degrading of the Consul server cluster.
  disruptionBudget:
    enabled: true

    # maxUnavailable will default to (n/2)-1 where n is the number of
    # replicas. If you'd like a custom value, you can specify an override here.
    maxUnavailable: null

client:
  enabled: true

ui:
  # True if you want to enable the Consul UI. The UI will run only
  # on the server nodes. This makes UI access via the service below (if
  # enabled) predictable rather than "any node" if you're running Consul
  # clients as well.
  enabled: "true"

  # True if you want to create a Service entry for the Consul UI.
  #
  # serviceType can be used to control the type of service created. For
  # example, setting this to "LoadBalancer" will create an external load
  # balancer (for supported K8S installations) to access the UI.
  service:
    enabled: true
    type: NodePort

    # Annotations to apply to the UI service.
    # Example:
    #   annotations: |
    #     "annotation-key": "annotation-value"
    annotations: null

    # Additional ServiceSpec values
    # This should be a multi-line string mapping directly to a Kubernetes
    # ServiceSpec object.
    additionalSpec: null

syncCatalog:
  enabled: true

connectInject:
  enabled: true
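
For reference, with the values above the ACL and TLS credentials land in Kubernetes secrets: consul-helm generates consul-ca-cert and consul-ca-key for TLS and consul-client-acl-token for the client agents, alongside the hand-created consul-gossip-encryption-key. A quick sketch for checking what was created and where Consul now listens, assuming the release is named consul and runs in the current namespace (curl -k just skips certificate verification for a reachability test):

# Credentials end up in Kubernetes secrets (names assume global.name=consul):
kubectl get secret consul-ca-cert consul-client-acl-token consul-gossip-encryption-key

# With global.tls.enabled=true the agents serve HTTPS on 8501, and unless
# global.tls.httpsOnly is explicitly set to false (it defaults to true)
# the plain HTTP port 8500 is closed, so clients still dialing 8500 see
# "connection refused".
kubectl port-forward consul-server-0 8501:8501 &
curl -sk https://127.0.0.1:8501/v1/status/leader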

Vault configuration (values.yaml):

server:

  ingress:
    enabled: true
    labels:
      traffic: internal
    annotations:
      kubernetes.io/ingress.class: nginx
    hosts:
      - host: company.com
        paths: [/]

  # extraSecretEnvironmentVars is a list of extra environment variables to set on the stateful set.
  # These variables take their values from existing Secret objects.
  extraSecretEnvironmentVars:
    - envName: AWS_ACCESS_KEY_ID
      secretName: vault-secrets
      secretKey: AWS_ACCESS_KEY_ID
    - envName: AWS_SECRET_ACCESS_KEY
      secretName: vault-secrets
      secretKey: AWS_SECRET_ACCESS_KEY
    - envName: AWS_REGION
      secretName: vault-secrets
      secretKey: AWS_REGION
    - envName: AWS_KMS_KEY_ID
      secretName: vault-secrets
      secretKey: AWS_KMS_KEY_ID
    - envName: AWS_KMS_ENDPOINT
      secretName: vault-secrets
      secretKey: AWS_KMS_ENDPOINT
    - envName: CONSUL_HTTP_TOKEN
      secretName: consul-client-acl-token
      secretKey: token
    - envName: CONSUL_CLIENT_KEY
      secretName: consul-ca-key
      secretKey: tls.key
    - envName: CONSUL_CLIENT_CERT
      secretName: consul-ca-cert
      secretKey: tls.crt

  # Enables a headless service to be used by the Vault Statefulset
  service:
    enabled: true
    # clusterIP controls whether a Cluster IP address is attached to the
    # Vault service within Kubernetes.  By default the Vault service will
    # be given a Cluster IP address, set to None to disable.  When disabled
    # Kubernetes will create a "headless" service.  Headless services can be
    # used to communicate with pods directly through DNS instead of a round robin
    # load balancer.
    # clusterIP: None

  dataStorage:
    enabled: true
    # Size of the PVC created
    size: 10Gi
    # Name of the storage class to use.  If null it will use the
    # configured default Storage Class.
    storageClass: "standard"

  ha:
    enabled: true
    replicas: 3

    # config is a raw string of default configuration when using a Stateful
    # deployment. The default is to use Consul as the HA storage backend.
    # This should be HCL.
    #
    # Note: Configuration files are stored in ConfigMaps, so sensitive data
    # such as passwords should be either mounted through extraSecretEnvironmentVars
    # or through a Kube secret. For more information see:
    # https://www.vaultproject.io/docs/platform/k8s/helm/run#protecting-sensitive-vault-configurations
    config: |
      ui = true
      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "consul" {
        path = "vault"
        address = "HOST_IP:8500"
        token = "CONSUL_HTTP_TOKEN"
        tls_ca_file = "CONSUL_CLIENT_KEY"
        tls_cert_file = "CONSUL_CLIENT_CERT"
      }
      service_registration "kubernetes" {}

# Vault UI
ui:
  # True if you want to create a Service entry for the Vault UI.
  #
  # serviceType can be used to control the type of service created. For
  # example, setting this to "LoadBalancer" will create an external load
  # balancer (for supported K8S installations) to access the UI.
  enabled: true
  serviceType: "ClusterIP"
  serviceNodePort: null
  externalPort: 80

  # loadBalancerSourceRanges:
  #   - 10.0.0.0/16
  #   - 1.78.23.3/32

  # loadBalancerIP:

  # Extra annotations to attach to the ui service
  # This can either be YAML or a YAML-formatted multi-line templated string map
  # of the annotations to apply to the ui service
  annotations: {}

#
# seal "awskms" {
#   region     = "AWS_REGION"
#   access_key = "AWS_ACCESS_KEY_ID"
#   secret_key = "AWS_SECRET_ACCESS_KEY"
#   kms_key_id = "AWS_KMS_KEY_ID"
#   endpoint   = "https://AWS_KMS_ENDPOINT"
# }
#
# tls_ca_file = "CONSUL_CACERT"
# tls_cert_file = "CONSUL_CLIENT_CERT"
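
A note on the storage stanza above: the vault-helm chart only substitutes a couple of literal placeholders (HOST_IP and POD_IP) into the config at startup; it does not expand arbitrary environment-variable names, so token = "CONSUL_HTTP_TOKEN" sends that literal string to Consul. Likewise, tls_ca_file and tls_cert_file expect file paths, not PEM text injected through an env var. A sketch of what the stanza could look like against a TLS- and ACL-enabled Consul, assuming the consul-ca-cert secret is mounted at /vault/userconfig/consul-ca-cert (see the extraVolumes sketch further down):

storage "consul" {
  path    = "vault"
  scheme  = "https"
  address = "HOST_IP:8501"  # the HTTPS port once global.tls.enabled=true

  # No token here on purpose: when the config omits it, Vault's Consul
  # client falls back to the real CONSUL_HTTP_TOKEN environment variable
  # set through extraSecretEnvironmentVars.

  # A path to the CA certificate file, not the PEM body from an env var.
  # With consul-helm's default settings the HTTPS API does not require a
  # client certificate, so tls_cert_file/tls_key_file are omitted here
  # (an assumption; verify against your verify_incoming settings).
  tls_ca_file = "/vault/userconfig/consul-ca-cert/tls.crt"
}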

Consul works well, and Vault also worked with Consul before I enabled the features mentioned above. Can anybody provide me with a solution to my issue?

I'm currently testing this in minikube. We are deploying everything in the same namespace, and we are using secrets to hold all sensitive data, which is why we are trying to use environment variables that point to those secrets.
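
Environment variables are fine for the token itself, but the TLS material has to exist on disk: both the CONSUL_CACERT variable that the Consul client understands and the tls_ca_file config option expect a file path, not certificate text. The chart's server.extraVolumes setting can mount the secret as a file; a minimal sketch, assuming the CA certificate sits in the consul-ca-cert secret that consul-helm creates:

server:
  # vault-helm mounts each entry read-only under /vault/userconfig/<name>/
  extraVolumes:
    - type: secret
      name: consul-ca-cert

Also note that consul-ca-key holds the CA's private key, which Vault should not need at all, so the CONSUL_CLIENT_KEY/CONSUL_CLIENT_CERT variables pointing at the CA material can likely be dropped (an assumption based on the values above).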

carrchang commented 3 years ago

@iamaverrick You need to check the Consul token you used; the policy the token requires is described in https://www.vaultproject.io/docs/configuration/storage/consul.
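
For context, the consul-client-acl-token secret referenced in the Vault values holds the token consul-k8s issues to the Consul client agents; it lacks the key/value, service, and session privileges Vault's Consul storage backend needs. A sketch of the policy roughly as that page describes it (verify the exact rules there):

# vault-storage-policy.hcl: what Vault needs from Consul as a storage
# backend: write to its KV prefix, register the "vault" service, read
# agent endpoints, and manage sessions for leader election.
key_prefix "vault/" {
  policy = "write"
}
service "vault" {
  policy = "write"
}
agent_prefix "" {
  policy = "read"
}
session_prefix "" {
  policy = "write"
}

With a management token (when manageSystemACLs is on, the bootstrap token should sit in a secret, presumably consul-bootstrap-acl-token here), the policy and a dedicated token can be minted with consul acl policy create -name vault-storage -rules @vault-storage-policy.hcl and consul acl token create -policy-name vault-storage, and the resulting token stored in whatever secret CONSUL_HTTP_TOKEN points at.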