Closed dimitrigraf closed 2 years ago
I found the issue: the secret path (path: key1.yaml) was the problem. You cannot have the .yaml file extension there. With path: key1 it works just fine. I only ended up with .yaml because I did a deep dive in the code here and obviously misunderstood it. This one might also add to the confusion.
Is this something to fix anyway, or should the issue be closed?
Conversely, the path must have the .yaml extension to be considered at all. Only YAML files are loaded; however, they must be valid YAML with a dictionary at the top level. You probably actually wanted to do something like this:
apiVersion: v1
kind: Secret
metadata:
  labels:
    app.kubernetes.io/instance: netbox
  name: netbox-extra
data:
  secret.yaml: |
    key1: dmFsdWUx #value1
    key2: dmFsdWUy #value2
That would go with:
extraConfig:
  - values:
      FOO: bar
  - secret: # same as pod.spec.volumes.secret
      secretName: netbox-extra
      items:
        - key: secret.yaml
          path: secret.yaml
      optional: false
Does that work for you?
Thanks for the reply, Chris. And a happy new year!
So, I tried your suggestion but ran into errors when trying to apply the secret:
Error from server (BadRequest): error when creating "temp.yaml": Secret in version "v1" cannot be handled as a Secret: v1.Secret.ObjectMeta: v1.ObjectMeta.TypeMeta: Kind: Data: decode base64: illegal base64 data at input byte 4, error found in #10 byte of ...|#value2\n"},"kind":"|..., bigger context ...|"key1: dmFsdWUx #value1\nkey2: dmFsdWUy #value2\n"},"kind":"Secret","metadata":{"annotations":{"kube|...
Not really surprising, since the keys inside the embedded YAML file are not base64-encoded. But encoding those as well would probably result in the rest of the mechanism failing. Or what's your take on this?
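For what it's worth, the API server's complaint can be reproduced with Python's strict base64 decoder: each value under a Secret's data: key must be valid base64 on its own, and the literal YAML text (with its spaces and comments) is not. A quick sketch using the strings from the secret above:

```python
import base64
import binascii

# The individual values inside the file are themselves valid base64:
assert base64.b64decode("dmFsdWUx", validate=True) == b"value1"
assert base64.b64decode("dmFsdWUy", validate=True) == b"value2"

# ...but the whole embedded file content is not valid base64, so a
# strict decoder rejects it, just as the Kubernetes API server did:
try:
    base64.b64decode("key1: dmFsdWUx #value1\n", validate=True)
except binascii.Error as exc:
    print("rejected:", exc)
```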
Also, did you see my message in an old conversation of ours? https://github.com/bootc/netbox-chart/issues/56#issuecomment-1000538894
Update:
The secret is valid if we use stringData: instead of data:. The actual secret values then get base64-encoded when the secret is applied.
So, the mechanism works, but the S3 credentials (access key and secret access key) still get ignored by NetBox and media upload fails. That's what I was trying to get at in the other message linked above.
Ah yes, sorry, either use stringData, or base64-encode your YAML file and use data.
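To spell out the data: route: the entire embedded file has to be base64-encoded as a single blob, not its individual values. A minimal sketch (the file content is just the example from above):

```python
import base64

# The plain-text config file you want to mount:
secret_yaml = "key1: value1\nkey2: value2\n"

# For `data:`, the *whole* file content becomes one base64 blob;
# this string would be the value of `secret.yaml:` under `data:`.
encoded = base64.b64encode(secret_yaml.encode()).decode()
print(encoded)

# Kubernetes decodes it back to the original file when mounting:
assert base64.b64decode(encoded).decode() == secret_yaml
```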
For your storage config, the AWS credentials have to be under the STORAGE_CONFIG top-level key. So you'd probably want something like this:
apiVersion: v1
kind: Secret
metadata:
  labels:
    app.kubernetes.io/instance: netbox
  name: netbox-extra
stringData:
  secret.yaml: |
    STORAGE_CONFIG:
      AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
      AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
      AWS_STORAGE_BUCKET_NAME: my-netbox-bucket
Alternatively, I expect you can pass only the bucket name in the storage config, and supply AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables. I haven't tested this, though.
Update 2:
I succeeded in specifying the storage config through the extraConfig mechanism.
The secret:
apiVersion: v1
kind: Secret
metadata:
  labels:
    app.kubernetes.io/instance: netbox
  name: netbox-extra
stringData:
  s3-credentials.yaml: |
    STORAGE_CONFIG:
      AWS_S3_ENDPOINT_URL: <endpoint-URL>
      AWS_S3_REGION_NAME: <region>
      AWS_STORAGE_BUCKET_NAME: <bucket-name>
      AWS_ACCESS_KEY_ID: <access-key>
      AWS_SECRET_ACCESS_KEY: <secret-key>
The reference in the values:
...
storageBackend: storages.backends.s3boto3.S3Boto3Storage
#storageConfig: # configured through extraConfig
extraConfig:
  - secret: # same as pod.spec.volumes.secret
      secretName: netbox-extra
      items:
        - key: s3-credentials.yaml
          path: s3-credentials.yaml
      optional: false
Is that config correct like this, with storageConfig commented out in the values.yaml?
Yes, that looks perfectly fine to me, if you prefer to use this mechanism.
Great! Thanks for the feedback. I was wondering whether such cases should be documented more clearly; I'd be happy to do that and contribute to the chart that way. But I'm not sure if that's something you'd want in the docs/values.yaml, since it's kind of generic and there are all kinds of things you can do with that mechanism. What do you think?
I think an addition to the README.md with a concrete example of how to use the extraConfig mechanism would be very welcome indeed!
Alright, I'm going to do that, then! I've also just quickly tested whether it's possible to have part of the storage config in the values.yaml and the keys in the secret, but it doesn't work: the config from the secret overwrites the storage config from the values.yaml. I personally prefer proper grouping in this case anyway; otherwise the config scatters too much.
Good stuff. I'll create a PR once I've updated the docs.
Ah yes, I've just pushed a change that should allow deep-merging dictionaries (or mappings, in YAML parlance) in extraConfig, so your scenario of configuring some parts of the storageConfig in the values and other parts (e.g. the secrets) in extraConfig should now work.
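The deep-merge behaviour described above can be sketched roughly like this (an illustration of the idea only, not the chart's actual implementation; the example values are made up):

```python
def deep_merge(base, override):
    """Recursively merge `override` into `base`: override wins on conflicts,
    but nested mappings are merged rather than replaced wholesale."""
    merged = dict(base)
    for key, value in override.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Part of the storage config from values.yaml:
values_config = {"STORAGE_CONFIG": {"AWS_STORAGE_BUCKET_NAME": "my-netbox-bucket"}}
# Credentials supplied via extraConfig:
extra_config = {"STORAGE_CONFIG": {"AWS_ACCESS_KEY_ID": "AKIAIOSFODNN7EXAMPLE",
                                   "AWS_SECRET_ACCESS_KEY": "..."}}

# Both the bucket name and the credentials survive the merge:
merged = deep_merge(values_config, extra_config)
print(merged["STORAGE_CONFIG"])
```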
Hey Chris,
How do I contribute to the repo? From looking at the latest PRs, I'd have done the following:
Is that how it works here on GitHub?
Yes, ideally base your commits on the develop branch; I'm very happy to accept PRs on GitHub. I'm happy enough to rebase if folks base their changes on the master branch, and I'm also happy to accept MRs on GitLab, so whatever is easiest!
It looks like #78 covers off the reason I was keeping this issue open, so I'll mark it pending for now.
> Alternatively, I expect you can pass only the bucket name in the storage config, and supply AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables. I haven't tested this, though.
I can confirm this approach works and is much cleaner than extraConfig:
---
apiVersion: v1
data:
  AWS_ACCESS_KEY_ID: ...
  AWS_SECRET_ACCESS_KEY: ...
kind: Secret
metadata:
  name: netbox-s3
  namespace: netbox
type: Opaque

## Additional environment variables to set
extraEnvs:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        key: AWS_ACCESS_KEY_ID
        name: netbox-s3
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        key: AWS_SECRET_ACCESS_KEY
        name: netbox-s3

storageBackend: storages.backends.s3boto3.S3Boto3Storage
storageConfig:
  AWS_STORAGE_BUCKET_NAME: 'bucket-name'
  AWS_S3_REGION_NAME: 'us-west-1'
This is included in chart version 4.1.0.
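This works because boto3 (used by django-storages' S3Boto3Storage) falls back to the standard AWS environment variables when no explicit keys are configured in the storage settings. The lookup pattern is roughly this (an illustrative sketch, not the libraries' actual code; resolve_aws_credentials is a made-up helper):

```python
import os

def resolve_aws_credentials(storage_config):
    """Illustrative lookup: explicit storage config wins, otherwise fall
    back to the standard AWS environment variables (as boto3 does)."""
    return {
        "access_key": storage_config.get("AWS_ACCESS_KEY_ID")
                      or os.environ.get("AWS_ACCESS_KEY_ID"),
        "secret_key": storage_config.get("AWS_SECRET_ACCESS_KEY")
                      or os.environ.get("AWS_SECRET_ACCESS_KEY"),
    }

# With only the bucket in storageConfig, the credentials come from the
# pod environment that extraEnvs injected from the netbox-s3 secret:
os.environ["AWS_ACCESS_KEY_ID"] = "AKIAIOSFODNN7EXAMPLE"
os.environ["AWS_SECRET_ACCESS_KEY"] = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

creds = resolve_aws_credentials({"AWS_STORAGE_BUCKET_NAME": "bucket-name"})
print(creds["access_key"])  # AKIAIOSFODNN7EXAMPLE
```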
Hey there,
Today I wanted to try the new extraConfig field and ran into some issues with it. I tried loading some values from a secret as described in the values.yaml, but the pods won't come up.
The secret:
The part in the values.yaml:
The error from the worker pod (more verbose):
I've been trying for hours but just couldn't find out whether there's an issue with my code or whether there's some bug somewhere.
Any ideas?