hashicorp / vault

A tool for secrets management, encryption as a service, and privileged access management
https://www.vaultproject.io/

`vault operator raft snapshot save` and `restore` fail to handle redirection to the active node #15258

Open · maxb opened this issue 2 years ago

maxb commented 2 years ago

Scenario: A 3-node Vault cluster using Raft storage, accessed via a load-balanced URL which can contact any one of the unsealed nodes.

Attempt to use `vault operator raft snapshot save`:

If it lands on a standby node, a rather opaque error is produced:

Error taking the snapshot: incomplete snapshot, unable to read SHA256SUMS.sealed file

Attempt to use `vault operator raft snapshot restore`:

If it lands on a standby node, a rather opaque error is produced:

Error installing the snapshot: redirect failed: Post "http://172.18.0.11:8200/v1/sys/storage/raft/snapshot": read snapshot.tar.gz: file already closed
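For anyone who wants to observe what is going on underneath, here is a minimal sketch (the load-balancer address and token are placeholders, not from the original report): the raft snapshot endpoints are answered by a standby with a redirect to the active node, which is the redirect the CLI then mishandles. Since curl does not follow redirects by default, the response headers make this visible whenever the load balancer happens to hand the request to a standby.

# Illustrative only: show the status line and any Location header.
# On a standby you should see a redirect pointing at the active node's api_addr;
# on the active node you get a 200 and the snapshot itself (discarded here).
curl -s -D - -o /dev/null \
  --header "X-Vault-Token: $VAULT_TOKEN" \
  "http://vault-lb.example.internal:8200/v1/sys/storage/raft/snapshot" \
  | grep -iE '^(HTTP|Location)'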
heatherezell commented 2 years ago

Hi there, @maxb! Thanks for this issue report. Our engineering teams are aware of this issue, and we have an item in the backlog to address it. (For my own internal tracking, it's VAULT-4568.) It hasn't been prioritized yet, however, so all I can currently say is to check out future release notes. :)

dtulnovgg commented 2 years ago

Same behavior here. Are there any workarounds other than executing snapshot operations directly on the leader node?

tcdev0 commented 2 years ago

I use a small backup script on every node, skipping snapshots on follower nodes:

...
# Snapshot only when this node is the current Raft leader: compare the
# leader's node_id from list-peers against this host's name.
if [ "$(vault operator raft list-peers --format=json | jq --raw-output '.data.config.servers[] | select(.leader==true) | .node_id')" = "$(hostname -a)" ]; then
  echo "make raft snapshot $raft_backup/$time.snapshot ..."
  /usr/local/bin/vault operator raft snapshot save "$raft_backup/$time.snapshot"
else
  echo "not leader, skipping raft snapshot."
fi
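If the node_id does not happen to match the hostname in your environment, a minimal alternative sketch of the same leader check is below. It assumes curl and jq are available, reuses the $raft_backup and $time variables from the script above, and uses a placeholder default address; the unauthenticated /v1/sys/leader endpoint reports whether the queried node currently holds leadership.

# Ask the local node whether it is the active node; is_self is true only on
# the leader, so standbys skip the snapshot.
if [ "$(curl -s "${VAULT_ADDR:-http://127.0.0.1:8200}/v1/sys/leader" | jq -r '.is_self')" = "true" ]; then
  /usr/local/bin/vault operator raft snapshot save "$raft_backup/$time.snapshot"
else
  echo "not leader, skipping raft snapshot."
fi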
pmcatominey commented 2 years ago

Traced the issue to #14269; the result is never updated with the response of the redirected request.

maxb commented 1 year ago

Although the linked PR #17269 has rightly identified a logic bug which should be fixed, it doesn't wholly fix this issue.

Many people may be running Vault behind a load balancer, without direct access to individual backend nodes. Just making the vault CLI client process the redirection properly won't help at all if it doesn't have network access to the redirected URL!

hardeepsingh3 commented 1 year ago

I'm also having the same issue while running Vault within AKS and running the raft snapshot save command on the leader raft pod. Any luck on a solution here?

fancybear-dev commented 1 year ago

We had a similar issue as well. I find it really weird that there is no real solution for it from HashiCorp (proper redirection?), given that Raft in HA is the advised setup.

We run an HA cluster of 5 VMs in total with Raft, using a MIG in GCP. We had the same issue: we couldn't reliably create snapshots, because they would only succeed if the request ended up at the leader. The load balancer does not let you route a request to a specific VM -> which is logical -> that is exactly what a load balancer is for.

Our fix was to create a separate backend service with health checks that query /v1/sys/leader and verify that is_self equals true. This produces a backend that only ever sees a single VM as healthy -> the leader. That backend is used only for the snapshot API calls. Since the load balancer only routes to healthy VMs -> those calls always land on the leader. Problem solved.

This tactic can also be used in other cloud environments, so perhaps this helps some people.
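A related option for the health check itself (my own suggestion, not what the poster above describes): Vault's /v1/sys/health endpoint encodes the node's role in its HTTP status code, 200 for the active node and 429 for a standby by default, so a plain status-code health check can single out the leader without parsing the response body. A minimal sketch, with an illustrative address:

# Prints only the status code: 200 => active node, 429 => standby (defaults;
# the codes can be tuned via query parameters such as standbycode).
curl -s -o /dev/null -w '%{http_code}\n' \
  "http://vault-node.example.internal:8200/v1/sys/health"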

mohsen-abbas commented 8 months ago

We have consistently encountered the same issue with our Vault HA cluster on Kubernetes. Each time a new leader is elected, we have to update VAULT_ADDR in our CronJob to point at the new leader. Essentially, we have set up a CronJob to regularly back up the Vault cluster and synchronize the snapshots with a GCP bucket.

Is there a way to dynamically determine the runtime leader and direct requests solely to the current leader of the cluster? Below is a snippet of the cronjob for your reference, and we welcome any further suggestions you may have. Your assistance is greatly appreciated.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: vault-snapshot-cronjob
  namespace: vault-secrets-server
spec:
  schedule: "0 0 *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: vault-snapshotter
          volumes:
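One way to avoid hard-coding the leader is to discover it at runtime from the unauthenticated /v1/sys/leader endpoint, which any unsealed node answers with the leader's API address. A sketch under assumptions, not a tested manifest: the service name vault.vault-secrets-server.svc, plain HTTP, the backup path, and the availability of curl and jq in the CronJob image are all placeholders.

# Discover the current leader at runtime, then snapshot against it.
LEADER_ADDR="$(curl -s "http://vault.vault-secrets-server.svc:8200/v1/sys/leader" | jq -r '.leader_address')"
export VAULT_ADDR="$LEADER_ADDR"
vault operator raft snapshot save "/backups/vault-$(date +%Y%m%d-%H%M%S).snap"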

icc-garciaju commented 2 months ago

I'm experiencing the same issue after moving to Integrated Storage, even after electing a new leader or performing the snapshot with the root token.

/ # vault operator raft list-peers
Node       Address                        State       Voter
----       -------                        -----       -----
vault-0    vault-0.vault-internal:8201    leader      true
vault-1    vault-1.vault-internal:8201    follower    true
vault-2    vault-2.vault-internal:8201    follower    true
/ # export 'VAULT_ADDR=https://vault-0.vault-internal:8200'
/ # vault operator raft snapshot save /dumps/vault-20240711-062200.snap
Error taking the snapshot: incomplete snapshot, unable to read SHA256SUMS.sealed file

/ # vault operator raft list-peers
Node       Address                        State       Voter
----       -------                        -----       -----
vault-0    vault-0.vault-internal:8201    follower    true
vault-1    vault-1.vault-internal:8201    leader      true
vault-2    vault-2.vault-internal:8201    follower    true
/ # export 'VAULT_ADDR=https://vault-1.vault-internal:8200'
/ # vault operator raft snapshot save /dumps/vault-20240711-062200.snap
Error taking the snapshot: incomplete snapshot, unable to read SHA256SUMS.sealed file
/ # vault operator raft snapshot inspect /dumps/vault-20240711-062200.snap
zenrabbit007 commented 1 month ago

Set the VAULT_ADDR environment variable to the vault-active Service (created by the official Helm chart; it selects only the active pod). That makes sure the snapshot request is made against the leader node:

export VAULT_ADDR=http://vault-active.vault.svc.cluster.local:8200