rancher-sandbox / rancher-desktop

Container Management and Kubernetes on the Desktop
https://rancherdesktop.io
Apache License 2.0

Securing local kubeconfig #2209

Open dee-kryvenko opened 2 years ago

dee-kryvenko commented 2 years ago

Problem Description

Kubeconfig on the local file system is a liability, just like unencrypted SSH keys. For remote clusters this is typically taken care of by things like AWS IAM Authenticator for EKS, but for local clusters the kubeconfig typically includes the client cert and key in plain text. Depending on the use case, you may have some sensitive stuff in that local cluster; in my case, I run my infra tooling (terraform/helm/etc.) in my local cluster, and it has access to manage remote clusters via secrets I store on the local cluster. We need a way to protect that kubeconfig.
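To make the liability concrete: any process running as the user can lift the private key straight out of the kubeconfig. A sketch (the .users[0] index is an assumption that the first user entry belongs to the local cluster):

# Extract and decode the embedded client key from ~/.kube/config:
kubectl config view --raw -o jsonpath='{.users[0].user.client-key-data}' | base64 -d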

Proposed Solution

Surprisingly, I wasn't able to find off-the-shelf kubectl exec auth plugins implementing local OS keychains/keyrings. So I've created my own https://github.com/plumber-cd/kubectl-credentials-helper last night. I haven't had a chance to give it much testing yet, but I was wondering if you would like to entertain the idea to incorporate something like this in RD.
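For context, any exec credential plugin wires into the kubeconfig through the standard exec stanza; roughly like this (the install path and the user entry name here are placeholders, not the helper's actual layout):

# Replace the embedded client-certificate-data/client-key-data in the
# user entry with an exec credential plugin:
kubectl config set-credentials rancher-desktop \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=/usr/local/bin/kubectl-credentials-helper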

Additional Information

Beyond kubectl, there are obviously things like limactl and rdctl that need some protection too, but here I want to focus on kubectl as a starting point.

jandubois commented 2 years ago

Can you expand on what your actual threat model is?

Do you want to have credentials encrypted at rest, so they can't be extracted from the disk or a backup?

What are the types of attack you want to protect against?

dee-kryvenko commented 2 years ago

Mainly malware or other kinds of untrusted code that might otherwise steal it, or use it to deploy a bitcoin miner to my local cluster. The keychain typically prompts the user every time something tries to use the secret, unless the user chooses Always Allow.
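For concreteness, on macOS this behavior is visible with the built-in security CLI (the service/account names below are made up for illustration):

# Store a secret in the login keychain:
security add-generic-password -s local-k8s-client-key -a rancher-desktop -w "$(cat client.key)"
# Reading it back from an app other than the one that created the item
# triggers the user-approval prompt (or Always Allow) described above:
security find-generic-password -s local-k8s-client-key -w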

jandubois commented 2 years ago

untrusted code that otherwise may either steal it or use it to deploy a bitcoin miner to my local cluster.

I see. I think it will be hard/impossible to prevent the local user from accessing the VM, and once you are running as root inside the VM there is really no way you can protect the cluster from the user anymore.

Even if you could somehow prevent access to the VM over ssh (which probably breaks Lima), over serial.sock, etc., there is still the possibility of accessing the diffdisk image directly. Of course this requires a targeted attack, so it is unlikely to happen via some random malware, but I just wanted to point out that the VM should not be considered a security barrier.

I do find your kubectl-credentials-helper interesting (because I dislike storing unencrypted secrets at rest), but am not sure if I would want to incorporate it (yet?) into RD. But please keep me updated if you expand on your idea!

Right now I would probably prefer to put this into the docs somewhere and let users configure/install it themselves, to make it clear that we do not provide access control that prevents owners themselves from accessing their VM / container runtime / Kubernetes cluster.

Conceptually the local cluster is a development tool, not a production setup, so requiring a password every time you interact with it seems not very usable (but I haven't actually tried it yet).

dee-kryvenko commented 2 years ago

I see. I think it will be hard/impossible to prevent the local user from accessing the VM, and once you are running as root inside the VM there is really no way you can protect the cluster from the user anymore.

I think there are ways; for instance, an encryption key/passphrase kept only in RAM, prompted for at RD start (and maybe every time the user tries to exec into the VM shell), with the Lima VM disk encrypted with that same key.

I do find your kubectl-credentials-helper interesting

Thank you!

Right now I would probably prefer to put this into the docs somewhere and let users configure/install it themselves, to make it clear that we do not provide access control that prevents owners themselves from accessing their VM / container runtime / Kubernetes cluster.

Totally understand; that works too. As I'll explain in a second, this kind of security is definitely not for everybody and depends on the use case, so even if it were incorporated into RD, it probably shouldn't be enabled by default.

Conceptually the local cluster is a development tool, not a production setup

I don't want to make this message longer than it should be, but I am on a mission to propagate the idea of containerizing the local toolset, so this statement really depends on the use case. Using a local cluster as a development tool is obviously the most popular use case, but that doesn't mean it is the only one. Of course, if all it is used for is testing code locally in isolation, this type of security would be absurd overkill.

But I am a big proponent of containerizing the local toolset, and I am trying hard to normalize this pattern across the field. Unfortunately this use case remains rather exotic as of yet, but from a future-proofing standpoint I do not think it should be completely discounted.

See, what I am doing, and suggesting others do, is to never install the toolset (such as aws, or maybe terraform or helm) directly on your laptop. That way, everyone on my team who uses these tools locally always executes them in a predictable environment, with predictable versions.

In other words, instead of running terraform apply, I'd run something like:

docker run --rm -it -v $(pwd):/data -w /data -v ${HOME}/.aws:/root/.aws hashicorp/terraform:1.1.9 apply

That way switching versions becomes super easy, since you don't need tools such as tfenv. In fact, if normalized, this could completely replace the various rbenv, pyenv, and maybe even brew and Chocolatey; well, at least for most cases. You still have to install git and Kubernetes itself somehow.
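For instance, a thin wrapper makes the pinned version the path of least resistance (TF_VERSION here is just a convention I'm inventing for illustration, not part of any tool):

# Hypothetical wrapper around the docker run invocation above:
tf() {
  docker run --rm -it \
    -v "$(pwd)":/data -w /data \
    -v "${HOME}/.aws":/root/.aws \
    "hashicorp/terraform:${TF_VERSION:-1.1.9}" "$@"
}
# Switching versions is now just a variable:
#   TF_VERSION=1.2.0 tf plan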

To take it a step further, tools can be containerized, packaged, and distributed along with supplemental components, such as internal auth federation scripts. So I could build my own version of a terraform image, package it with things like jq or kubectl or maybe a Python runtime, and then standardize my Terraform modules so that they can safely assume certain dependencies are always present in the runtime and use them via an external provider.

And as the cherry on top, I am actually using Kubernetes instead of docker as the abstraction layer: people may choose to install any of the many different container runtimes, and Kubernetes abstracts over them for running toolsets in containers. For that I have created and open sourced a tool, runtainer, which essentially does what docker run does, but automatically discovers locations of interest such as ~/.aws, env variables, and ports to propagate into the tool containers.
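The bare-Kubernetes equivalent of the docker run example above looks roughly like the following; it lacks the volume/env/port discovery that runtainer automates, so take it as an illustration of the abstraction rather than a drop-in replacement:

# Run the same terraform image as a one-off pod on the local cluster:
kubectl run terraform -it --rm --restart=Never \
  --image=hashicorp/terraform:1.1.9 -- apply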

So, to sum it up: as you can see, my local k8s cluster is not "just a development cluster". It is my local sandbox, my local jail, and my brew. It is used much the way a local aws or terraform install would be used, i.e. it may and will have elevated access to various infrastructure elements. Obviously it is not used for day-to-day operations in the age of GitOps, but sometimes you do need to run aws, terraform, and other tools locally.

so requiring a password every time you interact with it seems not very usable

That's actually a great point; maybe my helper can be extended into some sort of agent, so that it keeps credentials in RAM after fetching them and does not require re-entering the password every time, but maybe only every X hours.
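Worth noting: the exec credential protocol already has a hook for part of this. If the plugin returns an expirationTimestamp in its ExecCredential status, client-go caches the credential in memory until it expires; an agent would still be needed to cache across separate short-lived kubectl invocations. Roughly what the helper would print on stdout (placeholder values):

{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "clientCertificateData": "-----BEGIN CERTIFICATE-----...",
    "clientKeyData": "-----BEGIN RSA PRIVATE KEY-----...",
    "expirationTimestamp": "2022-05-10T12:00:00Z"
  }
}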