Hi @kiich, you may also want to try passing the namespace via the X-Vault-Namespace header, instead of in the URL path: https://www.vaultproject.io/docs/enterprise/namespaces#usage
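For example, with placeholder values (adjust the address, namespace, role, and jwt for your setup):
curl -X PUT -H "X-Vault-Namespace: my-namespace" --data '{"role":"demo","jwt":"<jwt>"}' https://my-vault:8200/v1/auth/kubernetes/login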
Hey @tvoran - thanks for the tip! I've tried it via the header and removing the namespace from the URL path - same error of
{"errors":["permission denied"]}
Good to know you can do it via the header though - I missed that in the doc.
I'm kind of at a loss with this one - I know it should work because all the config looks good to me... and the Vault journald logs as well as the audit logs don't show anything.
Another trick for figuring out the correct URL path is to get it working with the vault CLI, and then use the -output-curl-string option to get the equivalent curl command.
vault write -output-curl-string auth/kubernetes/login role=demo jwt=...
And in your case you'd want to set the namespace option or environment variable as well.
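For example (namespace value is a placeholder), either of these works:
VAULT_NAMESPACE=my-namespace vault write -output-curl-string auth/kubernetes/login role=demo jwt=...
vault write -output-curl-string -namespace=my-namespace auth/kubernetes/login role=demo jwt=...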
This sort of question is a good one to ask in the discuss forums too: https://discuss.hashicorp.com/c/vault
Oh that's nice to know @tvoran, thank you - I did not know about that option to the vault CLI!
I've run that, and the only thing that was different between my curl and the output from the CLI was:
curl -X PUT -H "X-Vault-Request: true" -H "X-Vault-Token: $(vault print token)" ...
I assumed I don't need the X-Vault-Token, as I didn't see that in the example from the tutorial either? I've tried it nonetheless but still get the dreaded {"errors":["permission denied"]}.
I've double and triple checked the Vault role and policy side to ensure it does have the service account name and namespace as part of the Bound/Audience parameters, so I can't think what else it could be!
Did the login work using the vault CLI?
And where are you getting the jwt to login with? I usually grab it from /var/run/secrets/kubernetes.io/serviceaccount/token in a running Pod, since the projected token changes in recent k8s versions mean there's no longer a Secret created in the namespace.
You can also decode the jwt you're using to verify it's for the service account and namespace it should be with something like https://github.com/mike-engel/jwt-cli
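For example (pod name is a placeholder), the sub claim should show the service account and namespace:
jwt decode $(kubectl exec <pod-name> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)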
Did the login work using the vault CLI?
No it didn't - which does make me think it must be the Vault role/policy side, but I have verified that that is set up correctly...
Good to know about /var/run/secrets/kubernetes.io/serviceaccount/token - I was actually using the decoded Secret token (which, as I'm sure you know, has the iss incorrectly set), so I was disabling iss validation in Vault, but the token inside the POD seems to have this set correctly.
I've decoded the jwt via jwt.io and it all looks ok to me - yet the "permission denied" persists.
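(For reference, that disabling is the disable_iss_validation flag on the auth backend config, something along these lines - placeholder host and the default mount path shown:)
vault write auth/kubernetes/config kubernetes_host=<host> disable_iss_validation=true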
Have you tried enabling debug-level logging on the Vault server side? I think that will allow the kubernetes auth plugin to log why the login was unauthorized.
Another guess here, but I've accidentally reproduced this when I forgot to base64 decode the token from the Secret. i.e. you'll need to do something like this before attempting a vault login:
kubectl get secrets <secret name> -o jsonpath='{ .data.token }' | base64 -d
I'd also suggest copying the token from /var/run/secrets/kubernetes.io/serviceaccount/token and trying that with curl/vault CLI, since that's the same token the vault-agent sidecar would use.
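For example (pod and role names are placeholders):
JWT=$(kubectl exec <pod-name> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
vault write auth/kubernetes/login role=<role-name> jwt="$JWT"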
Hey @tvoran - thanks so far for all of your suggestions/feedback - super appreciate it.
Have you tried enabling debug-level logging on the Vault server side? Ah, not yet - the Vault backend is looked after by another team in my org (it's the Enterprise version, so we have a separate team looking after it). I did also think of this, as the journald log I can get access to doesn't seem to show me much.
Another guess here, but I've accidentally reproduced this when I forgot to base64 decode ... Ahh yeah, for this bit I made sure I was using the decoded version. Thanks for checking on this.
I'd also suggest copying the token from /var/run/secrets/kubernetes.io/serviceaccount/token Ah, so after your previous message I started using this token value instead, as it seems to contain a valid iss value (i.e. the OIDC value for my EKS cluster).
The only thing I can think of now is the Vault policy - because when I specify an invalid role in the curl data, I do actually get a message saying:
Code: 400. Errors:
* invalid role name "dummy-role"
The policy attached to the correct role is for my specific secret only though, as I assumed I didn't need a special policy for doing the login operation?
Also to add extra information - sorry, maybe I should have said this before - I am connecting to Vault Enterprise, which is using Vault namespaces. And my auth backend is mounted like this:
namespace I log in to from the Vault UI = "lab"; auth backend is at "my-env/kubernetes/my-cluster"
So I have another Vault namespace called my-env under the Vault namespace lab, and that's where the kube auth backend is mounted with the name my-cluster - I am passing all this in my curl command, so I didn't think this setup would be a problem?
So my curl looks like:
curl ... https://my-vault-enterprise/v1/lab/my-env/auth/kubernetes/my-cluster/login
Ok for your nested namespace setup, either of these should work:
vault write -output-curl-string -namespace lab/my-env auth/kubernetes/my-cluster/login role=internal-app jwt=...
vault write -output-curl-string lab/my-env/auth/kubernetes/my-cluster/login role=internal-app jwt=...
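You can also sanity-check the role's bound service account names/namespaces from the CLI, e.g. (assuming your token can read the role):
vault read -namespace lab/my-env auth/kubernetes/my-cluster/role/internal-app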
I'd suggest getting the vault CLI working and then use -output-curl-string to determine the right curl arguments.
Also note that if you're putting the full namespace path in the url path, you should not use the X-Vault-Namespace header.
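For instance, with the values from your setup, either of these forms should be equivalent (just don't combine the header with the namespaced path):
curl -X PUT -H "X-Vault-Namespace: lab/my-env" --data '{"role":"internal-app","jwt":"<jwt>"}' https://my-vault-enterprise/v1/auth/kubernetes/my-cluster/login
curl -X PUT --data '{"role":"internal-app","jwt":"<jwt>"}' https://my-vault-enterprise/v1/lab/my-env/auth/kubernetes/my-cluster/login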
Thanks @tvoran - the 2nd one is what I've been using, and I can see that it is connecting to Vault because if I put an invalid namespace on the Vault role side, the error I get changes to * namespace not authorized. Once I correct the Vault role, I then get the permission denied again.
I checked the jwt and the expiry looks fine to me too - 1 year ahead - so everything looks ok to me, as do the namespace and service account name.
I am just lost on why this is not working now, and without some Vault log I fear it is just impossible to know why - so I am happy to close this because I don't think what we are doing from the client side is wrong.
Can you show the annotations you're using on a Pod that's auth'ed to Vault successfully?
Sure @tvoran - we are using the standard set plus an istio-specific one, which I didn't think mattered for login:
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-perms-secrets.json: "0644"
vault.hashicorp.com/agent-inject-secret-secrets.json: my-secret-path/kv-v2/data/secrets
vault.hashicorp.com/agent-inject-status: injected
vault.hashicorp.com/agent-inject-token: "false"
vault.hashicorp.com/agent-run-as-user: "100"
vault.hashicorp.com/namespace: lab/my-env
vault.hashicorp.com/role: internal-app
Ok from those annotations, it looks like the auth method is mounted at the default spot, so the CLI command would be something more like:
vault write -output-curl-string lab/my-env/auth/kubernetes/login role=internal-app jwt=...
You can see the exact paths that are being used by checking /home/vault/config.json in the vault-agent container that's injected into your Pod.
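For example (pod name is a placeholder):
kubectl exec <pod-name> -c vault-agent -- cat /home/vault/config.json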
Hey @tvoran
Oh that's odd - because the k8s auth backend is mounted not at auth/kubernetes but at auth/kubernetes/my-cluster. This is the config.json file:
cat /home/vault/config.json
{"auto_auth":{"method":{"type":"kubernetes","mount_path":"auth/kubernetes/my-cluster","namespace":"lab/islands"...
So surely I need my-cluster in the path of /login, right?
Hmm, I am super confused @tvoran.
I am exec-ed into the vault-agent of a POD that I know has the Vault secret fetched, so I know it's working.
But from that vault-agent, if I do vault write -namespace lab/islands auth/kubernetes/my-cluster/login role="my-role" jwt="..."
lo and behold, I get "permission denied" still!!
How can this be when:
the mount path - I got it from /home/vault/config.json
the namespace - I got it from /home/vault/config.json
the role - I got it from /home/vault/config.json
the jwt - I got it from the /var/run/secrets/kubernetes.io/serviceaccount/token file
and yet the vault command still gives me permission denied?!
And I confirmed the decoded jwt does have the right service account and namespace for the role I am using... 🤔
Hey @tvoran Oh that's odd - because the k8s auth backend is mounted not at auth/kubernetes but at auth/kubernetes/my-cluster ... so surely I need my-cluster in the path of /login, right?
The chart option injector.authPath="auth/kubernetes/my-cluster" (i.e. the env variable AGENT_INJECT_VAULT_AUTH_PATH: "auth/kubernetes/my-cluster" on your injector deployment) is probably set then. That'll set the default auth path.
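(In the vault helm chart that would come from something like this in the values, assuming that's how your injector is deployed:)
injector:
  authPath: "auth/kubernetes/my-cluster"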
If you're on a pod where an injected vault-agent successfully auth'ed, a command constructed from its /home/vault/config.json should work, i.e.
VAULT_ADDR=<vault.address> vault write -namespace <auto_auth.method.namespace> <auto_auth.method.mount_path>/login role=<auto_auth.method.config.role> jwt=$(cat <auto_auth.method.config.token_path>)
If that's not working then I'd suggest looking at the communication path between Vault CLI and Server. You mentioned istio, so that may have something to do with it?
Thanks for the info @tvoran, indeed:
The env variable AGENT_INJECT_VAULT_AUTH_PATH: "auth/kubernetes/my-cluster" was set on the injector deployment, so that makes sense now.
I've double checked /home/vault/config.json and I can find everything EXCEPT for auto_auth.method.config.token_path in there, so I have no idea how it's finding the jwt to use!
Sure enough, if I run:
vault agent -config=/home/vault/config.json
from the vault-agent container, I do get:
2022-08-08T09:22:03.932Z [INFO] sink.file: creating file sink
2022-08-08T09:22:03.932Z [INFO] sink.file: file sink configured: path=/home/vault/.vault-token mode=-rw-r-----
2022-08-08T09:22:03.932Z [INFO] sink.server: starting sink server
2022-08-08T09:22:03.932Z [INFO] template.server: starting template server
2022-08-08T09:22:03.932Z [INFO] (runner) creating new runner (dry: false, once: false)
2022-08-08T09:22:03.932Z [INFO] auth.handler: starting auth handler
2022-08-08T09:22:03.932Z [INFO] auth.handler: authenticating
2022-08-08T09:22:03.933Z [INFO] (runner) creating watcher
2022-08-08T09:22:04.039Z [ERROR] auth.handler: error authenticating:
error=
| Error making API request.
|
| URL: PUT https://my-vault/v1/lab/islands/auth/kubernetes/my-cluster/login
| Code: 403. Errors:
|
| * permission denied
backoff=1s
BUT what I don't understand is, how is it all working when the POD spins up (unless the vault agent init is doing something else/extra/special)??
At this point, I'm kind of ready to just let it go since the POD is working - meaning it is able to get the secret from Vault fine with the vault agent init and vault agent sidecar working... It would have been nice to be able to verify/simulate this from a curl though! Thanks for all your help on this mystery!
I've double checked the /home/vault/config.json and i can find everything EXCEPT for auto_auth.method.config.token_path in there so I have no idea how it's finding the jwt to use!
Kubernetes auto-auth in the agent will default to /var/run/secrets/kubernetes.io/serviceaccount/token if token_path is not set, so no cause for alarm there. The injector started writing token_path explicitly in the config somewhere around v0.13.0, so it sounds like you're using vault-k8s prior to that? (I've also tested this setup with vault-k8s v0.12.0 with no issues, but it's good to know details like vault-k8s version, Vault version, etc. when debugging.)
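For reference, a newer vault-k8s would render the method config roughly like this (illustrative, based on the values you posted, not your exact file):
{"auto_auth":{"method":{"type":"kubernetes","mount_path":"auth/kubernetes/my-cluster","namespace":"lab/islands","config":{"role":"my-role","token_path":"/var/run/secrets/kubernetes.io/serviceaccount/token"}}},...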
I think running vault agent -config=/home/vault/config.json should indeed work, and it does in my setup.
When I've tweaked the login parameters to see what errors are returned, the only way I could get a 403 with "permission denied" was when the path was incorrect (the namespace and/or auth_path). If it was the issuer, it should return a 500 with invalid issuer (iss) claim, and if it was the service account name or namespace, it should return a 403 with service account name not authorized or namespace not authorized.
It may also help to know the Vault server version, the vault-k8s version, and the vault agent version your system is using.
But since it sounds like it's all working otherwise, feel free to close this issue if you like 😃.
Thanks for looking into it further and apologies for not closing this issue @tvoran !
I've searched GitHub issues as well as the Vault docs but couldn't find an exact match to both my error message (there are quite a few) AND the setup, so I've decided to reach out here.
I have an EKS cluster running with a Vault Enterprise backend (so with Vault namespaces) and the Kube auth backend set up. Using the Vault k8s agent injector on the cluster side, I have it set up so I can annotate my deployment spec to have the agent-init and sidecar injected and retrieve secrets successfully from Vault. So all good there.
However, I wanted to follow the guide here https://www.vaultproject.io/docs/auth/kubernetes where it has a curl command that does the /login with the JWT of my app's service account - this command is giving me a 403 permission denied message which I couldn't understand. My POD is able to get a secret fine from Vault when I use the annotations to inject the sidecar and so on, so I know the Vault auth backend is correct and working - things like the jwt for the reviewer callback, connectivity from Vault to EKS and so on are all verified to be correct and working.
It's just when I do that curl /login call, I always get 403 permission denied, even though the Vault doc says I should be able to log in. Things I verified:
1) JWT that was used to configure the vault auth backend is correct
2) CA that was used to configure the vault auth backend is correct
3) The service account assigned to my app that I do a curl from exists
4) The service account assigned to my app that I do a curl from is set up correctly in the vault auth backend role section with the correct namespace
5) The service account assigned to my app that I do a curl from has the right secret, and the token from that secret decodes correctly
One thing to mention is that my EKS is running 1.21 - and I've read this https://www.vaultproject.io/docs/auth/kubernetes#kubernetes-1-21 - but I have confirmed that my iss is set correctly, so there's no need to disable JWT validation. I am wondering if I need to pass in an extra header or something with curl for the login to work?