chrisjholly opened this issue 4 years ago
Hi @chrisjholly,
Once the Vault cluster is initialized and unsealed, Vault will automatically label its pod with the role in the cluster (either active or standby). It makes sense that when the cluster is new no active server exists yet.
I think we could make this configurable in case you need the ingress during initialization of the cluster. Would this help?
All of our Vault interaction goes through the ingress (including initialise/backup/restore), therefore having the ingress up is vital even if Vault is not initialised.
> I think we could make this configurable in case you need the ingress during initialisation of the cluster. Would this help?
This would help. Are you suggesting that the new active service feature is behind a toggle or some other option that may allow the ingress to route to a standby pod when there is no active pod and then return to the active pod when ready?
@jasonodonnell Any progress on this?
In addition, `vault-standby` and `vault-ui` also don't contain any endpoints. In my case this does not change even after Vault initialization and unsealing.
@unitto1 it seems you forgot to add

```hcl
service_registration "kubernetes" {}
```

to the config, and to update to a Vault version where this setting is available.
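For reference, a minimal sketch of a Vault server config with that stanza in place — the listener and storage values here are illustrative assumptions, not taken from this thread:

```hcl
ui = true

listener "tcp" {
  address     = "[::]:8200"
  tls_disable = "true"
}

storage "raft" {
  path = "/vault/data"
}

# Lets Vault label its own pods (vault-active: "true"/"false")
# so the active/standby Services get endpoints.
service_registration "kubernetes" {}
```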
I am running into the same problem as @unitto1. Both the `vault-active` and `vault-standby` kubernetes services do not have endpoints for me, because they are configured with the label selectors `vault-active: "true"` and `vault-active: "false"` respectively (I have also attached the kubernetes service yaml for the `vault-active` svc below). Same as @unitto1, for me this is in an already-running vault instance (i.e. after vault initialization and unsealing). Since I am using the ha setup, the ingress for ha points to the `vault-active` service (see here), and with the service not having any endpoints that ingress does not work for me.
@kostyrev can you please point me to the docs about the config you're referencing, as well as which Vault version has this available, if possible?
I am using helm chart version v0.7.0. Here are some relevant links:

vault-active-svc.yaml
I am using:

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: vault-concourse
    meta.helm.sh/release-namespace: default
  labels:
    app.kubernetes.io/instance: vault-concourse
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.7.0
  name: vault-concourse-active
  namespace: default
spec:
  clusterIP: 192.168.9.209
  ports:
  - name: http
    port: 8200
  - name: https-internal
    port: 8201
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/instance: vault-concourse
    app.kubernetes.io/name: vault
    component: server
    vault-active: "true"
```
I was able to answer and fix my own issue. Yes, @kostyrev's suggestion of adding this to the config did the trick:

```hcl
service_registration "kubernetes" {}
```

This worked for me with helm chart version 0.7.0. I did have to set the following on the Kubernetes service account for the service registration to work:

```yaml
automountServiceAccountToken: true
```

Here are the docs I found useful:
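For anyone following along, a minimal sketch of what that setting looks like on the ServiceAccount manifest — the account name and namespace here are assumptions based on the chart's defaults:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault
  namespace: default
# Mount the service account token so Vault can call the Kubernetes API
# and label its own pod for service registration.
automountServiceAccountToken: true
```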
@yashbhutwala and @kostyrev can you guys be more specific? As there are both `server.ha.config` and `server.ha.raft.config` (if raft is being used) as valid places for this `service_registration "kubernetes" {}` line, where did you put it? I have it, same as @chrisjholly, only in `server.ha.raft.config`, and it behaves the same (no endpoint behind the ingress with a sealed and uninitialized cluster).
EDIT: So the issue I have is the same as the OP's: nothing behind the active service with a sealed/uninitialized cluster. Pretty sure it worked up to chart version 0.5.0.
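For concreteness, a sketch of where the stanza can live in the chart values, assuming a raft setup — the surrounding listener/storage values are illustrative, not from this thread:

```yaml
server:
  ha:
    enabled: true
    raft:
      enabled: true
      config: |
        ui = true

        listener "tcp" {
          address     = "[::]:8200"
          tls_disable = "true"
        }

        storage "raft" {
          path = "/vault/data"
        }

        # Without this stanza Vault never labels its pods,
        # so vault-active/vault-standby have no endpoints.
        service_registration "kubernetes" {}
```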
Having the same problem. For a `ha`-enabled vault that was just deployed (meaning it is sealed) the ingress returns 503, as there are no `vault-active=true` labeled pods. If I wish to unseal it via UI or CLI it is impossible, since the ingress is down.
What is the procedure to initialize / unseal the vault other than using `kubectl`?
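For the record, the `kubectl`-based bootstrap looks roughly like this — a sketch assuming the chart's default `vault` release name, so the pod and service names here are assumptions:

```shell
# Initialize the first pod; this prints the unseal keys and root token.
kubectl exec -ti vault-0 -- vault operator init

# Unseal it (repeat once per required key share).
kubectl exec -ti vault-0 -- vault operator unseal

# Join each remaining raft node to the cluster, then unseal it too.
kubectl exec -ti vault-1 -- vault operator raft join http://vault-0.vault-internal:8200
kubectl exec -ti vault-1 -- vault operator unseal
```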
Also, what is the use case for the `activeVaultPodOnly` value?
Perhaps an initialization or setup scenario of an ha vault does not make sense, as anyone with access to the ingress would be able to init it. But if an ha vault somehow becomes sealed after being unsealed, it seems to be impossible to unseal it without access to kubernetes, since the ingress starts returning HTTP 503 if the vault is sealed.
> ...If I wish to unseal it via UI or CLI it is impossible since the ingress is down. What is the procedure to initialize / unseal the vault other than using `kubectl`?
I have similar concerns. Any update on this? This kind of breaks the UX for unsealing/initializing a vault, and assumes that the minimum number of key holders have access to a CLI. Perhaps assign `vault-0` as the active vault to configure init values and/or unsealing.
> Perhaps an initialization or setup scenario of an ha vault does not make sense, as anyone with access to the ingress would be able to init it. But if an ha vault somehow becomes sealed after being unsealed, it seems to be impossible to unseal it without access to kubernetes, since the ingress starts returning HTTP 503 if the vault is sealed.
We observed the same behaviour.
What we did is use the following command to directly port-forward the Vault UI and perform the unseal:

```shell
kubectl port-forward service/vault-ui 8200:8200
```

Then you can access Vault from your browser at https://localhost:8200. Proceed past the insecure-certificate warning and then unseal your vault.
EDIT: My workaround works-ish, but you have to do it for each pod. However, I found this documentation that has the real guidelines to bootstrap your cluster: https://www.vaultproject.io/docs/platform/k8s/helm/examples/ha-with-raft
I think https://www.vaultproject.io/docs/platform/k8s/helm/examples/ha-with-raft is the way to go for getting your cluster initialized and set up using `kubectl`.
If you want to access the UI of sealed nodes, you should set `ui.activeVaultPodOnly = false` (which is the default).
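In chart values terms, that is simply the following sketch:

```yaml
ui:
  enabled: true
  # Default is false; when true, the UI service only routes to the active pod.
  activeVaultPodOnly: false
```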
In general, I don't know that it makes sense to have a service of sealed Vault nodes, since the only operation you can do is unseal them and join them to the cluster, after which they would immediately leave such a service.
But it would be easy enough to make such a service and just have it select on `vault-sealed: "true"`.
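A rough sketch of such a hand-made service, reusing the selector labels from the `vault-active` service attached above — the name and namespace are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vault-sealed
  namespace: default
spec:
  ports:
  - name: http
    port: 8200
  # Sealed pods fail readiness, so unready addresses must be published.
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/name: vault
    component: server
    # Pods the chart has labeled as sealed.
    vault-sealed: "true"
```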
Since updating to Vault Helm Chart 0.6.0, when you have a new install of Vault which is uninitialized, Vault can no longer be accessed via the ingress, as there is no leader that can be routed to through the `vault-active` kubernetes service.
Is there a known workaround for this?
I have listed some of my configuration below: