Open jorgemarey opened 3 months ago
Hi @jorgemarey,
Thanks for the report and sorry for the frustration.
Running a snapshot backup in a Nomad job seems like a common enough workflow that we should probably add a capability just to narrowly allow that use case:
```hcl
operator {
  capabilities = ["snapshot"]
}
```
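Assuming the capability lands as proposed, granting it to a backup job could look roughly like the sketch below. The policy and job names (`snapshot-only`, `backup`) are placeholders; `nomad acl policy apply` and its `-namespace`/`-job` flags already exist for attaching a policy to a job's workload identity.

```shell
# Sketch only: assumes the proposed "snapshot" capability ships as described.
cat > snapshot-only.hcl <<'EOF'
operator {
  capabilities = ["snapshot"]
}
EOF

# -namespace/-job scope the policy to the workload identity of one job
nomad acl policy apply -namespace default -job backup snapshot-only snapshot-only.hcl
```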
This implies that operator.policy = "write" would also allow snapshots, which I think is acceptable.
We should also add fine-grained capabilities for other operator endpoints, such as rotate-keys. This would allow you to give cluster administrators access to rotate keys in an emergency without handing them a full management token, or even access to, say, the snapshot capability directly.
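If per-endpoint capabilities were added, an emergency key-rotation policy might look like the sketch below. The "rotate-keys" capability name is taken from the comment above and does not exist today.

```hcl
# Hypothetical: grants root-key rotation and nothing else under operator
operator {
  capabilities = ["rotate-keys"]
}
```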
> Also, currently there's no way (at least that I've seen) of attaching the same policy to different jobs; we need to create the same policy under different names to do this.
This seems like a reasonable request to me. Mind opening a new issue if you have a preferred UX in mind? This is a distinct work item from the snapshot issue.
Hi @schmichael, I think what you're suggesting sounds fine.
> This seems like a reasonable request to me. Mind opening a new issue if you have a preferred UX in mind? This is a distinct work item from the snapshot issue.
I'll open a new issue for this. Thanks
We have a job that backs up Nomad (it takes a snapshot and then uploads it to S3); currently we fetch the Nomad token from the Vault Nomad secrets backend. We tried to migrate this to workload identities so the job uses its JWT to connect to Nomad, but since the snapshot API needs a management token, we don't know how to associate that policy with the job. Is there any way to do that? Also, currently there's no way (at least that I've seen) of attaching the same policy to different jobs; we need to create the same policy under different names to do this.
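For context, the backup job described above might look roughly like this sketch. It assumes the task's workload identity is exposed as NOMAD_TOKEN via `env = true` in the identity block; the image and the `upload-to-s3` step are placeholders. The open problem is that this token cannot today carry the management-only snapshot permission.

```hcl
# Sketch of the backup workflow; names and image are hypothetical
job "nomad-backup" {
  type = "batch"

  periodic {
    cron = "0 3 * * *" # nightly

  }

  group "backup" {
    task "snapshot" {
      driver = "docker"

      # Expose the task's workload identity as NOMAD_TOKEN in the environment
      identity {
        env = true
      }

      config {
        image   = "example/nomad-backup:latest" # placeholder
        command = "/bin/sh"
        args    = ["-c", "nomad operator snapshot save /tmp/nomad.snap && upload-to-s3 /tmp/nomad.snap"]
      }
    }
  }
}
```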
Maybe there could be a separate mapping of default policies/roles per job/group/task that administrative users can set.
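In the meantime, the duplicate-policy workaround can at least be scripted. This sketch assumes the existing `-job` flag on `nomad acl policy apply`; the job names and policy file are placeholders:

```shell
# Workaround sketch: one identically-bodied policy per job, differing only by name
for job in backup-eu backup-us; do
  nomad acl policy apply -job "$job" "snapshot-$job" snapshot-only.hcl
done
```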