getsops / sops

Simple and flexible tool for managing secrets
https://getsops.io/
Mozilla Public License 2.0

Add Kubernetes Secret as a store format #401

Open wmedlar opened 5 years ago

wmedlar commented 5 years ago

First-party Kubernetes support would be a wonderful addition to sops, the ability to read from and write to secrets manifests. It shouldn't be too dissimilar to the YAML or JSON stores, with decrypted values base64'd and wrapped in a k8s.io/api/core/v1.Secret.
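For illustration, an encrypted Secret manifest produced by such a store might look roughly like this; all values, the key ARN, and the elided fields are hypothetical placeholders, not real sops output:

```yaml
apiVersion: v1          # left in the clear so tooling can still identify the manifest
kind: Secret
metadata:
  name: test-secret
type: Opaque
data:
  API_KEY: ENC[AES256_GCM,data:...,tag:...,type:str]
sops:
  kms:
    - arn: arn:aws:kms:...   # hypothetical key ARN
  lastmodified: "2018-12-01T00:00:00Z"
  version: 3.2.0
```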

I wouldn't mind taking a stab at this, implementation seems straightforward enough, and it would be a good precursor to writing a custom sops-backed CRD a la Bitnami's SealedSecrets.

autrilla commented 5 years ago

Agreed, go for it! This is definitely something we want.

wmedlar commented 5 years ago

Fitting it into a store wasn't quite as trivial as I expected, so I'm looking into going full-on CRD route with this. @autrilla would you or other sops maintainers be willing to give feedback on a proposal for this feature before I dive too deep in?

autrilla commented 5 years ago

Sure, I’m willing to give feedback. What gave you trouble with the store approach?


ajvb commented 5 years ago

@wmedlar fyi this is coming, but in a different form, with sops publish (https://github.com/mozilla/sops/pull/473) - The idea is that you will be able to publish secrets from sops to k8s secrets (as well as other services like Vault or AWS Secrets Manager).
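For context, sops publish is driven by destination_rules in .sops.yaml. A sketch of what such a configuration might look like; the bucket name and Vault path are hypothetical, and the exact rule keys may differ from the final PR:

```yaml
destination_rules:
  - path_regex: s3/.*
    s3_bucket: my-secrets-bucket   # hypothetical bucket
  - path_regex: vault/.*
    vault_path: secret/myapp       # hypothetical Vault path
```

Publishing would then be something like sops publish s3/app.enc.yaml.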

mr-karan commented 5 years ago

The sops publish approach looks good, but it doesn't really conform to the GitOps way of doing things.

All cluster config should be in a declarative state, including the secrets. If people use tools like fluxctl, which internally runs kubectl apply, we still can't use sops with it, because the decryption key must be present inside the cluster and Kubernetes needs to know that this is a "special" secret, not a regular one.

I guess a CRD could still make sense alongside sops publish; if the maintainers think it's useful, I can probably take a look at this.

jvehent commented 5 years ago

The idea is to have sops publish as part of the provisioning process to send secrets to k8s secrets. We still respect the git way of doing things because sops documents are stored in git, decrypted during provisioning and sent to k8s.

Of course, that's just one way of doing things. YMMV.

hobti01 commented 5 years ago

What about an Admission Controller that publishes the declarative CRD to a Secret owned by the CRD?

Publish rules per namespace or cluster could be a separate CRD.

jlongtine commented 5 years ago

@hobti01 I've put some thought into this approach. I don't think it'd be too difficult to create a controller that takes something like:

apiVersion: "mozilla.org/v1"
kind: SOPSSecret
metadata:
  name: test-secret
spec:
  API_KEY: ENC[<encrypted_API_KEY>]
  PASSWORD: ENC[<encrypted_PASSWORD>]
  sops:
    kms:
    - <sops encryption info>
    lastmodified: '2019-04-22T19:52:23Z'
    mac: 
    unencrypted_suffix: _unencrypted
    version: 3.3.0

And created this:

apiVersion: v1
kind: Secret
metadata:
  name: test-secret
type: Opaque
data:
  API_KEY: <decrypted+base64ed_API_KEY>
  PASSWORD: <decrypted+base64ed_PASSWORD>

But, I haven't had time to work on implementing it yet. Anyone done anything around this?
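The core transformation such a controller would perform, once sops has decrypted the values, is just re-wrapping key/value pairs into a v1 Secret with base64-encoded data. A minimal sketch in Python (function name and values are hypothetical, error handling omitted):

```python
import base64

def to_k8s_secret(name: str, decrypted: dict) -> dict:
    """Wrap already-decrypted key/value pairs into a v1 Secret manifest."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "type": "Opaque",
        "data": {
            key: base64.b64encode(value.encode()).decode()
            for key, value in decrypted.items()
        },
    }

manifest = to_k8s_secret("test-secret", {"API_KEY": "s3cr3t"})
print(manifest["data"]["API_KEY"])  # "czNjcjN0", i.e. base64("s3cr3t")
```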

wmedlar commented 5 years ago

@jlongtine I played around with the idea a few months back and got a proof-of-concept controller working with GCP KMS. The hard part is integrating a k8s controller in a way that doesn't diminish the user experience of sops.

Unfortunately I haven't had time to work on it since, but I can share the knowledge I gained if you'd like to pursue it.

cyphermaster commented 5 years ago

FYI, there is a way to use sops to manage secrets: kustomize-sops.

Early stage, but looks promising.

skang0601 commented 5 years ago

I've had a lot of success just using helm-secrets: https://github.com/futuresimple/helm-secrets

I don't completely get what the controller architecture would achieve or solve at the moment, however. Unless you want the target cluster to be the only entity that can decrypt the payload, but that seems to contradict the current workflow of sops.

The base64 encoding of the secret values isn't necessary either, since the Kubernetes Secret object accepts stringData when writing.
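For reference, with stringData the API server does the base64 encoding itself (the value here is a hypothetical placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
type: Opaque
stringData:
  API_KEY: my-plaintext-key   # stored base64-encoded under .data by the API server
```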

gitirabassi commented 5 years ago

I don't see why a CRD is useful; more abstractions aren't needed here, I think. I don't even understand why a publish to K8s would be helpful: that could be substituted with a sops -d <enc_file> | kubectl apply -f - in whichever CI/CD tool you're using. I feel the only thing needed here is for the values of the keys apiVersion, kind, type, and metadata not to be encrypted, and that would require unencrypted_suffix to support []string instead of string. Am I right?
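Later sops versions address exactly this with encrypted_regex, which limits encryption to matching top-level keys so apiVersion, kind, type, and metadata stay in the clear. A sketch of a .sops.yaml creation rule (the KMS ARN is a hypothetical placeholder):

```yaml
creation_rules:
  - path_regex: .*secret.*\.yaml
    encrypted_regex: ^(data|stringData)$
    kms: arn:aws:kms:us-east-1:111122223333:key/hypothetical-key-id
```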

hobti01 commented 5 years ago

A CRD is useful for all the same reasons SealedSecrets is useful.

For us, a nice workflow with SOPS is:

Unfortunately we can't spare the cycles to contribute to this effort at the moment, but we really like sops and will keep watching :)

devstein commented 4 years ago

Related project for those that end up here: https://github.com/viaduct-ai/kustomize-sops . A good CRD-less approach

ryedin commented 4 years ago

Also related to this thread, fluxcd does now natively support sops content: https://github.com/fluxcd/flux/pull/2580

enote-kane commented 3 years ago

Just came to this topic due to actual need as well.

Every proposal here seems to focus on actually decrypting the data and then storing it as Secrets. However, from my perspective, that contradicts the actual security goals, since we all know Kubernetes Secrets are not stored encrypted by default. Although that can be changed at the cluster level, it provides a false sense of security for beginners.

So what I was looking for is some kind of interceptor/hook that allows me to mutate a Secret or ConfigMap while it is being mounted into a container (or just before the container's entrypoint/command executes), so that the decrypted data is mounted but never stored anywhere, and is used only in the context of the Pod/Service.

That way, not even the k8s cluster itself needs access to the decryption keys; only the Pods that want to mount the encrypted data need an appropriate role or other access to the key material.

Maybe there already is something like that but if so, I missed it.

ryedin commented 3 years ago

@enote-kane all very valid concerns. The system you're looking for that does all of those things is HashiCorp's Vault. That said, by "those things" I mean the general idea of securely storing and distributing secrets at runtime (not specifically doing anything with k8s Secret resources).

However, there are also a couple things here that might be misconceptions...

That way, not even the k8s cluster itself needs to have access to decryption keys, only the Pods that want to mount the encrypted data need to have an appropriate role or otherwise access to key data.

The way sops works (e.g. integration with a KMS provider) means no one ever gets the master decryption key; it is never distributed. sops uses envelope encryption: each file is encrypted with its own data key, and only that data key is wrapped by KMS. When you create the service principal or IAM user that has encrypt/decrypt privileges against that KMS key, sops authenticates with the provider as that user, sends the wrapped data key to the service over HTTPS, receives the plaintext data key back, and decrypts the file locally. The master key never leaves KMS. So that's one thing.
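To make the flow concrete, here is a toy sketch of envelope encryption. This is not sops's real implementation (sops uses AES256-GCM and real KMS APIs); the XOR keystream here is purely illustrative. The point is that only the small wrapped data key ever travels to the "KMS", while the file itself is decrypted locally:

```python
import hashlib
import secrets

# Toy stand-in for the KMS: the master key never leaves this "service".
MASTER_KEY = secrets.token_bytes(32)

def kms_encrypt(data_key: bytes) -> bytes:
    # Stand-in for KMS Encrypt: XOR the data key with a master-key-derived stream.
    stream = hashlib.sha256(MASTER_KEY + b"wrap").digest()
    return bytes(a ^ b for a, b in zip(data_key, stream))

def kms_decrypt(wrapped: bytes) -> bytes:
    return kms_encrypt(wrapped)  # XOR is its own inverse

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: keystream blocks derived from the data key and a counter.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Encrypt side: generate a data key, encrypt the file, store only the wrapped key.
data_key = secrets.token_bytes(32)
ciphertext = xor_stream(data_key, b"API_KEY=s3cr3t")
wrapped_key = kms_encrypt(data_key)

# Decrypt side: ask the "KMS" to unwrap the data key, then decrypt locally.
plaintext = xor_stream(kms_decrypt(wrapped_key), ciphertext)
print(plaintext)  # b'API_KEY=s3cr3t'
```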

The other thing is just about how k8s secrets work in general.

Firstly, they are provided to your pods in a safe manner using tmpfs (ramdisk): Secrets are only provided to nodes that have a scheduled pod that requires it, and the Secret is stored in tmpfs, not written to disk. When the last pod on a node that requires a Secret is deleted, the Secret is deleted from the node's tmpfs. Secrets are stored within a given namespace and can only be accessed by pods within the same namespace.

Secondly, depending on which k8s service you're using, it's highly likely these days that your cluster has encryption-at-rest enabled by default (unless you've rolled your own cluster), so the cluster-level storage of the secrets is very likely fine. For example if you're in Azure's AKS, the etcd disks are always encrypted at rest with no option to create a cluster that has that feature disabled.

Finally, regarding "it provides a false sense of security for beginners": absolutely none of this is for beginners xD.

ryedin commented 3 years ago

Oh, and give flux a good look. It works very nicely and allows you to safely store your encrypted k8s secrets files in your repo. It's really the perfect balance between safety and convenience, IMO. Once you need more robustness, you pretty much have one place to go next (HashiCorp Vault), and that comes with all kinds of additional infrastructure and required knowledge to manage (it's great, but it has a very real cost).

enote-kane commented 3 years ago

@ryedin Thank you a lot for clearing things up for me here.

I'll have a deeper look into the existing solutions then to get a better understanding where exactly they hook in.

enote-kane commented 3 years ago

After taking a deep dive into this matter again, I am now using the following approach, which doesn't require trusting the secrets service or other parts of the system provided by the cloud provider, except for the memory of the node itself:

apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  securityContext:
    runAsUser: 12345
    runAsGroup: 12345
    fsGroup: 12345

  initContainers:
  - name: secrets-preprocessor
    image: ...
    env:
    - name: AWS_REGION
      value: eu-central-1
    - name: AWS_ROLE_ARN
      value: ...arn...
    - name: AWS_WEB_IDENTITY_TOKEN_FILE
      value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
    command: ["sops"]
    args: ["--decrypt", "--output", "/decrypted/config.json", "/encrypted/config.json"]
    volumeMounts:
    - name: service-config
      mountPath: /encrypted
      readOnly: true
    - name: secrets-data
      mountPath: /decrypted
      readOnly: false
    - name: app-kms-token
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true
    - name: aws-iam-token
      mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
      readOnly: true

  containers:
  - name: ...
    image: ...
    command: ...
    args: ...
    volumeMounts:
    - name: secrets-data
      mountPath: /app/etc
      readOnly: true

  volumes:
  - name: service-config
    secret:
      secretName: my-sops-encrypted-config
  - name: secrets-data
    emptyDir:
      medium: Memory
      sizeLimit: 10Mi
  - name: app-kms-token
    secret:
      defaultMode: 420
      secretName: app-kms-token
  - name: aws-iam-token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: sts.amazonaws.com
          expirationSeconds: 86400
          path: token

This way, we can also eliminate some of the risks as outlined in the official documentation, mainly:

Of course, protecting access to the key (or service accounts in my case) may impose another issue.

However, the biggest benefits of this approach for me are: