Sealed Secrets consists of:

- sealed-secrets-controller
- kubeseal
- SealedSecret

A SealedSecret object has a template section which encodes all the fields you want the controller to put in the unsealed Secret. This includes metadata such as labels or annotations, but also things like the type of the Secret.
```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: mysecret
  namespace: mynamespace
  annotations:
    "kubectl.kubernetes.io/last-applied-configuration": ....
spec:
  encryptedData:
    .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....
  template:
    type: kubernetes.io/dockerconfigjson
    # this is an example of labels and annotations that will be added to the output secret
    metadata:
      labels:
        "jenkins.io/credentials-type": usernamePassword
      annotations:
        "jenkins.io/credentials-description": credentials from Kubernetes
```
After decrypting, the controller generates a raw Kubernetes Secret:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: mynamespace
  labels:
    "jenkins.io/credentials-type": usernamePassword
  annotations:
    "jenkins.io/credentials-description": credentials from Kubernetes
  ownerReferences:
  - apiVersion: bitnami.com/v1alpha1
    controller: true
    kind: SealedSecret
    name: mysecret
    uid: 5caff6a0-c9ac-11e9-881e-42010aac003e
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ewogICJjcmVk...
```
As you can see, the generated Secret resource is a "dependent object" of the SealedSecret and as such it will be updated and deleted whenever the SealedSecret object gets updated or deleted.
kubeseal uses an asymmetric encryption algorithm, and the encrypted result can only be decrypted by sealed-secrets-controller. By default, kubeseal fetches the certificate from the controller at runtime (which requires secure access to the Kubernetes API server). Alternatively, run `kubeseal --fetch-cert > mycert.pem` to store the certificate somewhere (e.g. local disk), after which `kubeseal --cert mycert.pem` works offline. The certificate is also printed to the controller log on startup.
Helm secrets is capable of leveraging Helm to template secrets resources.
If you work in a large team with several namespaces and you use Helm already, you might find Helm secrets more convenient than Sealed secrets. If you work as part of a small team this could be a minor issue.
Helm Secrets has another advantage over Sealed Secrets: it uses the popular open-source project SOPS (developed by Mozilla) for encrypting secrets. SOPS supports external key management systems, like AWS KMS, making it more secure as it's a lot harder to compromise the keys.
With that said, Helm Secrets and Sealed Secrets share the same issues - to use them, you must have permissions to decrypt the secrets. However, if you want to reduce your blast radius, you might not want to hand over the keys to your secrets to every DevOps and Developer in your team.
Also, Helm Secrets is a Helm plugin, and it is strongly coupled to Helm, making it harder to change to other templating mechanisms such as kustomize.
When encrypting a secrets file located in the helm_vars directory with `helm secrets enc`, the helm-secrets plugin uses the public key to encrypt the content. When decrypting, `helm secrets dec` decrypts the encrypted content and adds it to the values.yaml file; subsequent uses can read the values directly from this file.
External Secrets Operator is a Kubernetes operator that integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault and many more. The operator reads information from external APIs and automatically injects the values into a Kubernetes Secret.
The goal of External Secrets Operator is to synchronize secrets from external APIs into Kubernetes. ESO is a collection of custom API resources:
- ExternalSecret describes what data should be fetched, how the data should be transformed and saved as a Kind=Secret
- SecretStore specifies how to access the external API; a SecretStore maps to exactly one instance of an external API
- ClusterSecretStore is a cluster-scoped SecretStore that can be referenced by ExternalSecrets from all namespaces

These resources provide a user-friendly abstraction for the external API that stores and manages the lifecycle of the secrets for you.
The following code example uses Vault as the backend and uses a static token:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      # vault kv put secret/foo my-value=s3cr3t
      server: "http://vault.default.svc.cluster.local:8200"
      path: "secret"
      version: "v2"
      auth:
        # points to a secret that contains a vault token
        # https://www.vaultproject.io/docs/auth/token
        tokenSecretRef:
          name: "vault-token"
          key: "token"
---
apiVersion: v1
kind: Secret
metadata:
  name: vault-token
data:
  token: cm9vdA== # "root"
```
The following code example creates an ExternalSecret and references the SecretStore created in the previous step:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: vault-example
spec:
  refreshInterval: "15s"
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: example-sync
  data:
  - secretKey: foobar
    remoteRef:
      key: secret/foo
      property: my-value
---
# will create a secret with:
kind: Secret
metadata:
  name: example-sync
data:
  foobar: czNjcjN0
```
ESO synchronizes an ExternalSecret in the following way:

1. ESO uses spec.secretStoreRef to find the appropriate SecretStore. If it does not exist, or its spec.controller field does not match, the ExternalSecret is discarded and not processed.
2. The controller instantiates a client for the external API using the credentials from SecretStore.spec.
3. The controller fetches the data referenced by the ExternalSecret and decodes the secret if necessary.
4. The controller creates a Kind=Secret based on the template provided by ExternalSecret.target.template. Secret.data can be templated with secret values from external APIs.

Secrets Store CSI Driver for Kubernetes secrets - Integrates secrets stores with Kubernetes via a Container Storage Interface (CSI) volume.
The Secrets Store CSI Driver secrets-store.csi.k8s.io
allows Kubernetes to mount multiple secrets, keys, and certs stored in enterprise-grade external secrets stores into pods as a volume. Once the volume is attached, the data in it is mounted into the container's file system.
The SecretProviderClass is a namespaced resource in Secrets Store CSI Driver that is used to provide driver configurations and provider-specific parameters to the CSI driver.
Here is an example of a SecretProviderClass resource:
```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-database
spec:
  provider: vault
  parameters:
    vaultAddress: "http://vault.default:8200"
    roleName: "database"
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/db-pass"
        secretKey: "password"
```
Reference the SecretProviderClass in the pod volumes when using the CSI driver:
```yaml
volumes:
- name: secrets-store-inline
  csi:
    driver: secrets-store.csi.k8s.io
    readOnly: true
    volumeAttributes:
      secretProviderClass: vault-database
```
According to the analysis and summary of the above solutions, the current solutions for sensitive information fall roughly into two categories: storing the cipher text of the secret, or storing a reference that is resolved by an external secret management system.

If we use the cipher text of a secret, there are 3 questions to think about: who manages and rotates the keys, who performs the decryption, and where the cipher text is stored. Taking Sealed Secrets as an example to answer these 3 questions:

- kubeseal is responsible for cert rotation.
- sealed-secrets-controller will do the decryption.
- The cipher text is stored in the SealedSecret object.

In this regard, KusionStack needs to build a complete set of encryption and decryption suites. It can be placed in the Runtime or used as an independent encryption and decryption component, providing a set of APIs to be integrated with kusion.
If we use references, there are 3 similar questions to think about. Taking ESO as an example:

- The SecretStore contains references to secrets which hold the credentials to access the external API.
- The fetched data is saved as a Kind=Secret.

In this regard, KusionStack does not need to do much extra work; in particular, the most critical encryption and decryption actions are handed over to the corresponding secret management system. KusionStack needs to provide schemas in the Konfig library for connecting to the various platforms.
KusionStack needs both plans: for individual usage, the user is responsible for key pair management; for enterprise usage, we need an external secret management system, or even an identity service, to control authentication and authorization.
How do we deal with secrets that cross the config management structure?
Comprehensive research and thought.
From my point of view, the important things are:
The Go Cloud Development Kit (Go CDK) allows Go application developers to seamlessly deploy cloud applications on any combination of cloud providers.
```shell
# install
brew install vault
# run in dev mode
vault server -dev
# vault server
export VAULT_ADDR='http://127.0.0.1:8200'
# vault token
export VAULT_TOKEN='hvs.F9A6wK2FaaJbHkbw3nJ8sbuC'
# enable the Transit secrets engine
vault secrets enable transit
# create a named encryption key
vault write -f transit/keys/my-key
# encrypt some plaintext data using the /encrypt endpoint with a named key
vault write transit/encrypt/my-key plaintext=$(echo "my secret data" | base64)
# decrypt a piece of data using the /decrypt endpoint with a named key
vault write -field=plaintext transit/decrypt/my-key ciphertext=vault:v1:8SDd3WHDOjf7mq69CyCqYjBXAiQQAVZRkFM13ok481zoCmHnSeDX9vyf7w== | base64 -d
```
Transit uses a symmetric encryption algorithm; it is an out-of-the-box, built-in solution.
```go
func defaultSecretsManager(encryptedKey string) *Manager
```

go-cloud/secrets supports "awskms", "azurekeyvault", "gcpkms" and "hashivault":

```go
func cloudSecretsManager(secretsProvider, encryptedKey string) *Manager
```
```go
// Manager provides the interface for providing stack encryption.
type Manager interface {
	Type() string
	// An opaque state, which can be JSON serialized and used later to reconstruct the provider when deserializing
	// the deployment into a snapshot.
	State() interface{}
	// Encrypter returns a `config.Encrypter` that can be used to encrypt values when serializing a snapshot into a
	// deployment, or an error if one can not be constructed.
	Encrypter() (config.Encrypter, error)
	// Decrypter returns a `config.Decrypter` that can be used to decrypt values when deserializing a snapshot from a
	// deployment, or an error if one can not be constructed.
	Decrypter() (config.Decrypter, error)
}
```
New steps: a secrets-provider option, parsed as a URL.

Depends on:

```shell
enc --secrets-provider="hashivault://my-key"
dec --secrets-provider="hashivault://my-key"
```

```yaml
# cloud secret manager
secretsProvider: hashivault://my-key
# default secret manager
encryptedKey: dmF1bHQ6djE6bWptUEpGNXg4ZVlDYTl1SVJqU2kwNGZucjArT2M3NG82MDhPZWVvajFLcVZOTmhmNUM2Zld5V1g4b0wxZ2Rsa3ZsRCt6UWRydGt6bkY5bG4=
```
vals is a tool for managing configuration values and secrets. Supported backends:
- Echo
  - `ref+echo://KEY1/KEY2/VALUE[#/path/to/the/value]`
- File
  - `ref+file://relative/path/to/file[#/path/to/the/value]`
  - `ref+file://absolute/path/to/file[#/path/to/the/value]`
- EnvSubst
  - `ref+envsubst://$VAR1`
- Terraform (tfstate)
  - `ref+tfstate://relative/path/to/some.tfstate/RESOURCE_NAME`
  - `ref+tfstate:///absolute/path/to/some.tfstate/RESOURCE_NAME`
- SOPS powered by sops
  - `ref+sops://base64_data_or_path_to_file?key_type=[filepath|base64]&format=[binary|dotenv|yaml]`
  - `ref+sops://base64_data_or_path_to_file#/json_or_yaml_key/in/the_encrypted_doc`
- Vault
  - `ref+vault://PATH/TO/KVBACKEND[?address=VAULT_ADDR:PORT&token_file=PATH/TO/FILE&token_env=VAULT_TOKEN&namespace=VAULT_NAMESPACE]#/fieldkey`
  - `ref+vault://PATH/TO/KVBACKEND[?address=VAULT_ADDR:PORT&auth_method=approle&role_id=ce5e571a-f7d4-4c73-93dd-fd6922119839&secret_id=5c9194b9-585e-4539-a865-f45604bd6f56]#/fieldkey`
  - `ref+vault://PATH/TO/KVBACKEND[?address=VAULT_ADDR:PORT&auth_method=kubernetes&role_id=K8S-ROLE`
- AWS SSM Parameter Store
  - `ref+awsssm://PATH/TO/PARAM[?region=REGION]`
  - `ref+awsssm://PREFIX/TO/PARAMS[?region=REGION&mode=MODE&version=VERSION]#/PATH/TO/PARAM`
- AWS Secrets Manager
  - `ref+awssecrets://PATH/TO/SECRET[?region=REGION&version_stage=STAGE&version_id=ID]`
  - `ref+awssecrets://PATH/TO/SECRET[?region=REGION&version_stage=STAGE&version_id=ID]#/yaml_or_json_key/in/secret`
  - `ref+awssecrets://ACCOUNT:ARN:secret:/PATH/TO/PARAM[?region=REGION]`
- AWS S3
  - `ref+s3://BUCKET/KEY/OF/OBJECT[?region=REGION&profile=AWS_PROFILE&version_id=ID]`
  - `ref+s3://BUCKET/KEY/OF/OBJECT[?region=REGION&profile=AWS_PROFILE&version_id=ID]#/yaml_or_json_key/in/secret`
- AWS KMS
  - `ref+awskms://BASE64CIPHERTEXT[?region=REGION&profile=AWS_PROFILE&alg=ENCRYPTION_ALGORITHM&key=KEY_ID&context=URL_ENCODED_JSON]`
  - `ref+awskms://BASE64CIPHERTEXT[?region=REGION&profile=AWS_PROFILE&alg=ENCRYPTION_ALGORITHM&key=KEY_ID&context=URL_ENCODED_JSON]#/yaml_or_json_key/in/secret`
- Google GCS
  - `ref+gcs://BUCKET/KEY/OF/OBJECT[?generation=ID]`
  - `ref+gcs://BUCKET/KEY/OF/OBJECT[?generation=ID]#/yaml_or_json_key/in/secret`
- GCP Secrets Manager
  - `ref+gcpsecrets://PROJECT/SECRET[?version=VERSION]`
  - `ref+gcpsecrets://PROJECT/SECRET[?version=VERSION]#/yaml_or_json_key/in/secret`
- Azure Key Vault
  - `ref+azurekeyvault://VAULT-NAME/SECRET-NAME[/VERSION]`
- GitLab Secrets
  - `ref+gitlab://my-gitlab-server.com/project_id/secret_name?[ssl_verify=false&scheme=https&api_version=v4]`
```shell
# default config
# VAULT_ADDR: 127.0.0.1:8200
# VAULT_TOKEN: ~/.vault-token
echo "foo: ref+vault://secret/foo#/foo" | vals eval
# specify host and proto
echo "foo: ref+vault://secret/foo?proto=http&&host=127.0.0.1:8200#/foo" | vals eval
# specify address
echo "foo: ref+vault://secret/bar?address=http://127.0.0.1:8200#/bar" | vals eval
```
```go
import "github.com/variantdev/vals"

runtime, err := vals.New(vals.Options{})
if err != nil {
	panic(err)
}

valsRendered, err := runtime.Eval(map[string]interface{}{
	"inline": map[string]interface{}{
		"foo": "ref+vault://secret/foo?proto=http&&host=127.0.0.1:8200#/foo",
		"bar": map[string]interface{}{
			"baz": "ref+vault://secret/bar?address=http://127.0.0.1:8200#/bar",
		},
	},
})
```
Step 1: label the stack.yaml that needs securing

```yaml
name: dev
labels:
  kusionstack.io/secure: true
```
Step 2: replace ref secrets before preview

```go
// Replace ref secrets if needed
secure, ok := stack.Labels[projectstack.LabelSecure].(bool)
if ok && secure {
	if err := vals.ReplaceRefs(planResources); err != nil {
		return err
	}
}
```
Step 3: helper command to evaluate a reference secret

```shell
kusion eval ref+vault://secret/foo#/foo
```
Common sense: DO NOT save cipher text or sensitive data in a git repo.
Feature Request
Let me introduce 2 scenarios here:
Since Konfig is a public repository, it is obviously inappropriate to store the sensitive information in the above scenarios in a large repository. Therefore, this issue explores solutions for maintaining sensitive information in a large repository.