michalziobro opened this issue 1 year ago
@michalziobro Hey. Thank you for reporting the issue.
I tried to replicate the same environment as yours, and it looks like everything works fine for me. You are right, it should inject the 1Password item value located at the path you provided.
I think the problem is in the way you're trying to verify that the secret is injected. When I did it the same way as you:
kubectl -n teama exec \
$(kubectl -n teama get pod -l app=app-example -o jsonpath="{.items[0].metadata.name}") \
--container app-example1 -- printenv USERNAME PASSWORD
It also prints the path instead of the value.
I think kubectl just reads the deployment YAML, where the value is the path. But the secrets (the values from 1Password) are injected into the pod as env vars at runtime.
I can confirm that they are injected by reading the env var values directly inside the client pod and printing them to the logs. They print as masked values:
USERNAME: '<concealed by 1Password>',
PASSWORD: '<concealed by 1Password>',
In my case the client pod is a Node.js app, so I added console.log(process.env.USERNAME) to print the value.
Could you please try something like that? Or interact with the injected values in any other way (e.g., connect to the DB and check that the connection succeeds).
Hi @volodymyrZotov , thanks for answering!
Actually, in my example kubectl is used to execute a command inside a running container, in this case printenv USERNAME PASSWORD. So it's not reading from the YAML manifest.
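To illustrate with a local sketch (no cluster needed): an env var given to one process is not visible from a separately started session, which mirrors how a `kubectl exec` session relates to the container's wrapped entrypoint.

```shell
# A variable passed to one process ("the wrapped entrypoint")...
FOO=secret sh -c 'echo "wrapped session: $FOO"'
# ...is invisible to a separately started session ("kubectl exec").
sh -c 'echo "separate session: ${FOO:-<unset>}"'
```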
It seems that kubectl exec starts a new session inside the container, but the 1Password injector injects secrets only into the current session. I did some more tests; here's my manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-example
  namespace: teama
spec:
  selector:
    matchLabels:
      app: app-example
  template:
    metadata:
      annotations:
        operator.1password.io/inject: "app-example1"
      labels:
        app: app-example
    spec:
      containers:
        - name: app-example1
          image: busybox:latest
          command: [ "sh", "-c" ]
          args:
            - while true; do
              echo -en '\n';
              echo USERNAME $USERNAME, PASSWORD $PASSWORD;
              sleep 10;
              done;
          # This app will have the secrets injected using Connect.
          env:
            - name: OP_CONNECT_HOST
              value: http://onepassword-connect.onepassword.svc.cluster.local:8080
            - name: OP_CONNECT_TOKEN
              valueFrom:
                secretKeyRef:
                  name: onepassword-token
                  key: token
            - name: USERNAME
              value: op://<vault-id>/<item-id>/username
            - name: PASSWORD
              value: op://<vault-id>/<item-id>/password
Running kubectl exec as described above gives the same output:
op://<vault-id>/<item-id>/username
op://<vault-id>/<item-id>/password
But if we examine logs we get what we want:
kubectl -n teama logs \
$(kubectl -n teama get pod -l app=app-example -o jsonpath="{.items[0].metadata.name}") \
--container app-example1
output:
USERNAME <concealed by 1Password>, PASSWORD <concealed by 1Password>
So, we can also do the injection in our newly created session, for example:
kubectl -n teama exec \
$(kubectl -n teama get pod -l app=app-example -o jsonpath="{.items[0].metadata.name}") \
--container app-example1 -- /op/bin/op run -- printenv USERNAME PASSWORD
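Note that `op run` masks secret values in its output by default, so the above may still print concealed placeholders. If the image ships a recent 1Password CLI, it may support a `--no-masking` flag to print the real values (this is an assumption about the CLI version; check `/op/bin/op run --help` first):

```shell
kubectl -n teama exec \
  $(kubectl -n teama get pod -l app=app-example -o jsonpath="{.items[0].metadata.name}") \
  --container app-example1 -- /op/bin/op run --no-masking -- printenv USERNAME PASSWORD
```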
I think those considerations (along with a fully working example manifest) should be mentioned somewhere in the docs.
Hi @michalziobro. I see. You are right, the injector injects the secrets in the current session only.
I'll update the docs and mention such a scenario. Thank you very much for pointing this out!
As a workaround, you can inject secrets into the new session as you suggested:
kubectl -n teama exec \
$(kubectl -n teama get pod -l app=app-example -o jsonpath="{.items[0].metadata.name}") \
--container app-example1 -- /op/bin/op run -- printenv USERNAME PASSWORD
Or you can try the 1password-operator. It creates a Kubernetes Secret containing your 1Password item.
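For example, a minimal OnePasswordItem resource could look like this (names here are illustrative; the itemPath format follows the operator's conventions):

```yaml
apiVersion: onepassword.com/v1
kind: OnePasswordItem
metadata:
  name: app-example-secrets   # the created k8s Secret gets this name
  namespace: teama
spec:
  itemPath: "vaults/<vault-id>/items/<item-id>"
```

The operator then creates a Secret named app-example-secrets whose keys correspond to the item's field labels (e.g. username, password), which you can reference via secretKeyRef in your deployment.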
Please let me know if you have any questions.
Hi @volodymyrZotov, we've been testing 1password-operator too. We're looking for a way to build a secure integration with 1Password where deployments in certain namespaces can access only selected vaults. For this, the secrets injector looks easier to set up, since there's no need to deploy the operator in each namespace.
I do have some more questions. First, what is the expected behavior when an item in 1Password gets updated? According to my understanding and tests, the pod has to be restarted manually for the injector to re-inject the new value into the env vars.
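Concretely, by a manual restart I mean something like this (a sketch using the deployment and namespace names from the manifest above):

```shell
# Restart the Deployment so fresh pods pick up the updated item values.
kubectl -n teama rollout restart deployment/app-example \
  || echo "restart failed; check cluster connectivity"
```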
And a second one: you need to provide a command in the container spec, which the mutating webhook modifies by prefixing it with the /op/bin/op run -- portion. That might be considered a limitation, e.g. when deploying manifests using Helm charts, since not all of them give you the possibility to override command. Are you planning to implement any solution for such scenarios? One option could be an init container that injects secrets into a shared volume.
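To illustrate the init-container idea, here's a rough sketch (everything below is an assumption on my side: the 1password/op image tag, feeding a template to op inject via stdin, the file paths, and the your-app placeholder are not an official pattern):

```yaml
spec:
  volumes:
    - name: secrets
      emptyDir:
        medium: Memory
  initContainers:
    - name: inject-secrets
      image: 1password/op:2        # assumed CLI image
      command: ["sh", "-c"]
      args:
        - >-
          printf 'USERNAME=op://<vault-id>/<item-id>/username\nPASSWORD=op://<vault-id>/<item-id>/password\n'
          | op inject -o /secrets/app.env
      env:
        - name: OP_CONNECT_HOST
          value: http://onepassword-connect.onepassword.svc.cluster.local:8080
        - name: OP_CONNECT_TOKEN
          valueFrom:
            secretKeyRef:
              name: onepassword-token
              key: token
      volumeMounts:
        - name: secrets
          mountPath: /secrets
  containers:
    - name: app-example1
      image: busybox:latest
      command: ["sh", "-c", ". /secrets/app.env && exec your-app"]
      volumeMounts:
        - name: secrets
          mountPath: /secrets
```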
Hi! I'm asking for some help with debugging why secret injection is not working.
Here's my setup:
1Password Connect Server deployed in k8s using the helm chart; confirmed it's working fine with the 1Password Kubernetes Operator.
Installed the secrets injector with the helm chart:
helm install onepassword-injector 1password/secrets-injector -n onepassword
Enabled injection for the namespace:
kubectl label namespace teama secrets-injection=enabled
Created a k8s Secret with the 1Password token and a deployment like:
The secrets injector pod reported no errors:
The application pod reported no errors:
My understanding is that the data from the 1Password item should be injected into the application pod's env vars, so I'm checking with:
but the output is:
I'd expect to see it replaced with the actual username and password from the vault item.
Confirmed the item exists in 1Password:
op item get <item-id> --vault <vault-id> --fields label=username,label=password
Are you able to help me with this?