operate-first / support

This repo should serve as a central source for users to raise issues/questions/requests for Operate First.

As an OPF SRE, I want to safely store my secrets in HashiCorp Vault and use External secrets to deploy them, so I may securely track secrets via GitOps without risking exposure. #298

Closed: tumido closed this 2 years ago

tumido commented 3 years ago

Updated

Acceptance criteria:

tumido commented 3 years ago

/cc @HumairAK would you like to add something?

HumairAK commented 3 years ago

I think that covers the current usage. Though I would say it's not always clear where we would end up using sops in the future as new cases arise.

But let's deploy vault and see where it takes us.

tumido commented 3 years ago

Also, most of the route certs are now handled by the ACME operator, so handling the Route kind is not a hard requirement anymore.

dystewart commented 2 years ago

@HumairAK @tumido I'm working on deploying HashiCorp Vault to use with my Quicklab environment, but I'm running into errors that are preventing my pods from starting, specifically ImagePullBackOff and ErrImagePull errors. I am able to install/deploy HashiCorp Vault locally via the Helm chart, and my pods are created in OpenShift but get stuck in the pending state. I have tried the install with multiple versions of HashiCorp Vault and with multiple install parameters specified in the docs, but get the same result. For reference, here is the documentation I have been following: https://www.vaultproject.io/docs/platform/k8s/helm/openshift

Any idea where to look to find what's holding up the pods?
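A generic starting point for this kind of failure (a sketch; the namespace and pod names here are assumptions):

# Events on the stuck pod usually name the registry/image that failed to pull.
oc -n vault describe pod <vault-pod-name>

# Or scan recent events across the namespace, newest last.
oc -n vault get events --sort-by=.lastTimestamp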

larsks commented 2 years ago

Based on the output of this script:

#!/bin/sh
# Decrypt every *.enc.yaml file in the tree and report any that do not
# contain a Secret resource.

tmpfile=$(mktemp encXXXXXX)
trap 'rm -f "$tmpfile"' EXIT

find . -name '*.enc.yaml' -print | while read -r enc; do
    echo "checking $enc" >&2
    if ! sops -d "$enc" > "$tmpfile"; then
        continue
    fi

    if ! grep -q 'kind: Secret' "$tmpfile"; then
        echo "$enc"
    fi
done

We are encrypting the following non-Secret resources:

./cluster-logging/overlays/moc/zero/routes/route.enc.yaml
./kfdefs/overlays/moc/zero/opf-dashboard/route.enc.yaml
./kfdefs/overlays/moc/zero/opf-monitoring/datasource/opf-openshift-monitoring-grafanadatasource.enc.yaml
./kfdefs/overlays/moc/zero/opf-monitoring/datasource/opf-prom-datasource.enc.yaml
./keycloak/overlays/moc/infra/clients/rick.enc.yaml
./keycloak/overlays/moc/infra/clients/curator.enc.yaml
./keycloak/overlays/moc/infra/clients/infra.enc.yaml
./keycloak/overlays/moc/infra/clients/zero.enc.yaml
./keycloak/overlays/moc/infra/clients/smaug.enc.yaml
./keycloak/overlays/moc/infra/clients/balrog.enc.yaml
./keycloak/overlays/moc/infra/clients/demo.enc.yaml
./keycloak/overlays/moc/infra/realm.enc.yaml
./grafana/overlays/moc/smaug/grafanadatasource.enc.yaml

dystewart commented 2 years ago

In Quicklab I've deployed 2 pods (1 pod running an instance of HashiCorp Vault server in a project named "vault", and 1 pod running an instance of External Secrets in a project named "external-secrets") in my test cluster. The Vault server is running in dev mode for now, meaning it is unsealed by default, making experimentation easier.

The Vault configuration is as follows (commands run inside the Vault pod):

  1. Enable Kubernetes authentication:

vault auth enable kubernetes

  2. Configure the auth method to use the service account token mounted by the Vault pod and the certificate from the cluster:

vault write auth/kubernetes/config \
    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    issuer=https://kubernetes.default.svc

  3. Create a secret with the Vault KV secrets engine:

vault kv put secret/vault-demo-secret1 username="phil" password="notverysecure"

  4. Define access to the secret by creating an access control policy:

vault policy write pmodemo - << EOF
path "secret/data/vault-demo-secret1" {
  capabilities = ["read"]
}
EOF

  5. Lastly, create a role to associate the Vault namespace and Vault service account with the policy created in the previous step:

vault write auth/kubernetes/role/pmodemo1 \
    bound_service_account_names=vault \
    bound_service_account_namespaces=vault \
    policies=pmodemo ttl=60m

In the steps above Kubernetes auth was configured and a secret and associated policy and role were created.
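A quick way to sanity-check that each piece above landed (a sketch; run inside the Vault pod, assuming the dev-mode setup described earlier):

vault auth list                          # kubernetes/ should appear as an auth method
vault kv get secret/vault-demo-secret1   # shows the username/password stored above
vault policy read pmodemo                # prints the policy body
vault read auth/kubernetes/role/pmodemo1 # shows the role's bindings and TTL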

Now, within the external-secrets project, we need to create the External Secrets controller and an ExternalSecret:

  1. Using Helm, deploy External Secrets. We have to set the VAULT_ADDR environment variable here to point External Secrets at the Vault API:

helm upgrade -i -n external-secrets external-secrets external-secrets/kubernetes-external-secrets --set "env.VAULT_ADDR=http://vault.vault.svc:8200"

  2. Create the ExternalSecret using the manifest:

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: exsecret1
  namespace: vault
spec:
  backendType: vault
  data:
    - key: secret/data/vault-demo-secret1
      name: password
      property: password
  vaultMountPoint: kubernetes
  vaultRole: pmodemo

oc create -f extsecret1.yml

So now we have a Kubernetes secret, created by the External Secrets controller, that contains the secret we stored in Vault.

Taking a closer look at the secret in the vault namespace:

oc -n vault get secrets exsecret1

NAME        TYPE      DATA      AGE
exsecret1   Opaque    1         2m29s

The output of the secret in yaml format:

oc -n vault get secret exsecret1 -o yaml

apiVersion: v1
data:
  password: bm90dmVyeXNlY3VyZQ==
kind: Secret
...

The data in the ExternalSecret, the resource that will be stored in GitHub, does not contain the actual secret info we created in Vault; rather, it just contains a reference to the secret in our Vault.

Interesting note: everything to this point appeared to have run and been created properly, except when I actually inspect my ExternalSecret:

oc get es exsecret1

NAME        LAST SYNC   STATUS                        AGE
exsecret1   6s          ERROR, missing client token   8h

And the output of the exsecret1 manifest shows the same error at the very bottom:

oc get es exsecret1 -o yaml

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  creationTimestamp: "2021-11-05T18:27:55Z"
  generation: 1
  managedFields:
  - apiVersion: kubernetes-client.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        .: {}
        f:backendType: {}
        f:data: {}
        f:vaultMountPoint: {}
        f:vaultRole: {}
    manager: oc
    operation: Update
    time: "2021-11-05T18:27:55Z"
  - apiVersion: kubernetes-client.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        .: {}
        f:lastSync: {}
        f:observedGeneration: {}
        f:status: {}
    manager: unknown
    operation: Update
    time: "2021-11-05T18:27:55Z"
  name: exsecret1
  namespace: vault
  resourceVersion: "10024416"
  uid: fcb5ecd1-c527-4d45-b232-2fa73d56aa7b
spec:
  backendType: vault
  data:
  - key: secret/data/vault-demo-secret1
    name: password
    property: password
  vaultMountPoint: kubernetes
  vaultRole: pmodemo
status:
  lastSync: "2021-11-06T03:16:18.498Z"
  observedGeneration: 1
  status: ERROR, missing client token

Need to look into the error. Hopefully it's as simple as fixing the kv secrets engine path.
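One way to debug this is to reproduce the controller's token exchange by hand from inside the Vault pod (a sketch; note that the role created above was pmodemo1 while the ExternalSecret references vaultRole: pmodemo):

# Exchange a service account JWT for a Vault token; if this fails, the
# External Secrets controller will hit the same wall.
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
vault write auth/kubernetes/login role=pmodemo1 jwt="$JWT"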

HumairAK commented 2 years ago

@dystewart this is awesome!! A few questions:

  1. I see you deployed External Secrets via Helm; is there an operator for it? Ideally on OLM.
  2. You put the secret at secret/vault-demo-secret1 but then the policy and ExternalSecret reference secret/data/vault-demo-secret1 -- is that a typo?
  3. Is there any way the policies and auth can be configured declaratively?

dystewart commented 2 years ago

Some updates: the Vault operator in the OperatorHub (OpenShift 4.9.5), called Vault Config Operator, does not support the creation of Vault instances via its CRDs. There is another Vault operator available in OpenShift 4.8, as pointed out by @HumairAK, but my Quicklab cluster is on 4.9.5.

For this reason I am attempting to use another operator, not included in the OperatorHub, called Vault Operator, which is provided by Banzai Cloud. This operator will also allow us to configure features like auth and unsealing the vault. More specifics can be found here: https://banzaicloud.com/blog/vault-operator/

Installation of the Banzai Cloud Vault operator via kubectl is quite straightforward. Clicking the install box at the link below displays something like the following: https://operatorhub.io/operator/vault

Install Vault Operator on Kubernetes

  1. Install Operator Lifecycle Manager (OLM), a tool to help manage the Operators running on your cluster:

$ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.19.1/install.sh | bash -s v0.19.1

Step 1 can be ignored for now, since we already have an OLM installed and running in the cluster.

  2. Install the operator by running the following command:

$ kubectl create -f https://operatorhub.io/install/vault.yaml

Step 2 is as simple as copy and paste while logged into the cluster.

  3. After install, watch your operator come up using the next command:

$ kubectl get csv -n my-vault

Step 3 is where something has gone wrong, as the command returns:

$ kubectl get csv -n my-vault
No resources found in my-vault namespace.

Taking a look at the new operator installation listed under Installed Operators, we can see a couple of errors:

[screenshot: error]

The error is a bit misleading, as the catalog source is not actually missing; rather, the details in the subscription YAML manifest are a little different because we already had an installation of OLM in the cluster. For clarity, this manifest file is found under Installed Operators -> vault -> YAML tab. Here is the portion of the subscription manifest which is causing the CatalogSourcesUnhealthy error:

#subscription-my-vault.yaml
...
spec:
  channel: beta
  name: vault
  source: operatorhubio-catalog
  sourceNamespace: olm
...

This info is cluster specific, and you need to change the source and source namespace to reflect those in use in your cluster: source expects the name of a CatalogSource, and sourceNamespace expects the namespace where said CatalogSource exists. You can find the CatalogSources and related info in the cluster settings under OperatorHub. Changing the above snippet to the following eliminates the missing catalog error:

#subscription-my-vault.yaml
...
spec:
  channel: beta
  name: vault
  source: certified-operators
  sourceNamespace: openshift-marketplace
...
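To see which catalog sources a cluster actually has (so the Subscription's source/sourceNamespace can be matched against them), something like:

# Lists every CatalogSource in the cluster, in all namespaces.
oc get catalogsources -A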

Returning to the Details tab of our my-vault Subscription, we see:

[screenshot: new errors]

The ResolutionFailed error persists, and the reason is that there is no Vault operator within the certified-operators CatalogSource. In fact, there is no Vault operator in any of the other 3 CatalogSources either.
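A quick way to confirm that from the CLI (a sketch):

# Lists every package the cluster's catalogs ship; no output means no
# catalog provides a vault operator.
oc get packagemanifests -n openshift-marketplace | grep -i vault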

HumairAK commented 2 years ago

@dystewart thanks for the update!

For this reason I am attempting to use another operator, not included in the OperatorHub, called Vault Operator, which is provided by Banzai Cloud. This operator will also allow us to configure features like auth and unsealing the vault. More specifics can be found here: https://banzaicloud.com/blog/vault-operator/

I'm a little confused about which operator you're trying to deploy... the steps with the commands seem to suggest it is on OperatorHub, and https://operatorhub.io/install/vault.yaml points to resources from https://github.com/coreos/vault-operator ... which is by CoreOS, not Banzai Cloud.

dystewart commented 2 years ago

@HumairAK Strange that they were pointing to resources from CoreOS, not Banzai Cloud, in the install instructions on their own site... I hadn't even noticed that while installing.

Either way here is the response to the issue I created upstream after having deprecation issues using that install method. https://github.com/banzaicloud/bank-vaults/issues/1474

Now working on installing the operator via the Banzai Cloud Vault Operator helm chart, as discussed and recommended in the issue linked above. I will be converting the helm charts into manifests. More updates to come.

anishasthana commented 2 years ago

Hey @dystewart, are you aware of https://github.com/argoproj-labs/argocd-vault-plugin? This could be super relevant.

anishasthana commented 2 years ago

Hey folks, any updates here? Just an interested onlooker :-)

HumairAK commented 2 years ago

@dystewart how are things on this front? When do you think we can deploy hashicorp vault on smaug and begin converting secrets?

dystewart commented 2 years ago

@HumairAK @anishasthana I got held up recently putting together a walkthrough for some new interns, but that should be wrapped up today, so updates are coming soon! As far as when we can deploy, I'll have a much better estimate in the next couple of days.

HumairAK commented 2 years ago

Sounds great, thanks @dystewart!

larsks commented 2 years ago

I was looking at the vault helm chart recently, and it looks like it sets up a bunch of things we may not need. As far as I can tell, the cluster-scoped resources are (mostly) only necessary if we want to use the agent injector. If our plan is to use the external secrets controller instead, then we can eliminate a chunk of the resources that are included in the helm chart, leaving us with just the namespaced resources necessary to configure and run the vault server itself. I think that means just these files (extracted from the helm template into operate-first style trees):

Cluster-scoped resources:

Namespaced resources:

...and a PVC for storage, if we start with the file backend.
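For illustration, a minimal sketch of that PVC (the name and size are assumptions, not from this thread):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vault-storage   # assumed name; must match the volume in the vault pod spec
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi     # assumed size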

Is that crazy talk?

HumairAK commented 2 years ago

Well, it doesn't hurt to try it; we'll have to create the namespace anyway, and this is a very minimal set of resources.

@dystewart wdyt?

larsks commented 2 years ago

This is what I was able to put together for deploying an HA cluster: https://github.com/larsks/opf-vault

Deploying like this means there's a bunch of post-install configuration (setting up auth providers, etc). I don't know if that's the same for the operator install as well; I'll have to take a look because I'm curious how it compares.

HumairAK commented 2 years ago

If this helps us get going sooner so we can start migrating from sops, I think we should go ahead and add the generated manifests to smaug and deploy it -- assuming that switching to an operator in the future does not require us to once again alter our secrets (for instance, the addition of new CRDs, etc.). WDYT @dystewart @larsks?

dystewart commented 2 years ago

Making good progress on this front using what @larsks put together https://github.com/larsks/opf-vault as inspiration/baseline. Currently putting together a working prototype in quicklab with external-secrets.

HumairAK commented 2 years ago

Awesome @dystewart keep us posted, feel free to throw a pr to the apps repo to add it to smaug when ready.

dystewart commented 2 years ago

@larsks @HumairAK Here is the official way to auto-unseal the vault: https://learn.hashicorp.com/tutorials/vault/autounseal-transit?in=vault/auto-unseal

The process is relatively straightforward, but it requires creating another instance of Vault to run the transit secrets engine; so in theory it saves a bit of configuration once started, at the expense of needing another pod.

I'm a little confused how you would go about initially interacting with this extra Vault instance if it were not in "dev mode" (unsealed and initialized by default) without manually unsealing it. And everywhere in the HashiCorp docs where a dev mode Vault is used, there are warnings to never use it in production. So as far as I can tell, if you wanted to do this with a non-dev-mode Vault instance, you'd need to initialize it and unseal it using the Shamir keys method. In other words, it sounds like auto-unseal adds some extra unnecessary steps. Wdyt?
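For reference, the auto-unseal wiring itself is just a seal stanza in the server config pointing at the second Vault's transit engine (a sketch; the address, token, and key name here are assumptions):

# vault server config (HCL) on the instance being auto-unsealed
seal "transit" {
  address    = "http://transit-vault.vault.svc:8200"  # the second Vault instance (assumed)
  token      = "<token with transit encrypt/decrypt rights>"
  key_name   = "autounseal"
  mount_path = "transit/"
}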

dystewart commented 2 years ago

I have successfully deployed an HA instance of Vault with 3 pods in a dev cluster. I've also installed the external-secrets operator via OLM, and deployed an instance of external-secrets with said operator.

Next steps involve some self-teaching of the different Vault authentication methods and how to get the integration with external-secrets right.

larsks commented 2 years ago

So as far as I can tell, if you wanted to do this with a non-dev-mode Vault instance, you'd need to initialize it and unseal it using the Shamir keys method. In other words, it sounds like auto-unseal adds some extra unnecessary steps. Wdyt?

I think if we were deploying multiple vaults (e.g., with a master vault on the infra cluster and then additional vaults in our managed clusters), maybe we could use the infra vault to auto-unseal the others, but I don't think that's a high priority right now.

dystewart commented 2 years ago

I've finally gotten a working instance of high-availability Vault with the Raft storage backend and external-secrets in the dev cluster. Admittedly, actual configuration and initialization of the vault is a bit cumbersome, but in all it has helped me understand some of the confusing parts of Vault.

So basically what we need to do to get Vault ready for use with external-secrets is:

Vault deployment and config

- Init the `vault-0` pod (this will return 5 unseal keys and the Vault root token):

$ oc exec -ti vault-0 -- vault operator init
Unseal Key 1: mxzvxTJfR0uGFjWZUh4iaqLJ+nAX7MtJ61hmbFr3+b42
Unseal Key 2: guirGUxD1ZUhWrkRfSLHc1purWSjBiqeFuuUU9wTKUgm
Unseal Key 3: 4uWJ8Lt2930JefCHO3R5oudzu67U3/TQh0KvcTu10fub
Unseal Key 4: CeRpXSdihfZrW4lsQKFh1dAGy+YatR5IbCLTY+8c7bxk
Unseal Key 5: nphsBm8He4QfkWzgIpLDUUu2h20NxcJwIz0QeicsJ/72

Initial Root Token: s.4wZXDrE5WlPNli7Zh8EOTNAB

Vault initialized with 5 key shares and a key threshold of 3. Please securely
distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 3 of these keys to unseal it
before it can start servicing requests.

Vault does not store the generated master key. Without at least 3 keys to
reconstruct the master key, Vault will remain permanently sealed!

Next I installed the external-secrets operator via OLM. I chose to install the operator for all namespaces, so the operator itself is installed in the openshift-operators namespace. Once installed, create a new OperatorConfig instance using the newly installed operator; the default configuration is fine. Now we'll be able to create SecretStore and ExternalSecret instances that will be integrated with Vault.

The next step is to configure the kubernetes auth endpoint in vault to communicate with our external-secrets instance.
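For reference, a sketch of what that config might look like from inside the Vault pod -- the service account name, namespace, and policy name are assumptions; only the kv mount, the kubernetes mount path, and the demo-role name are taken from the manifests below:

# Enable and point Kubernetes auth at the cluster API.
vault auth enable kubernetes
vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# Policy granting read on the KV v2 path referenced by the ExternalSecret below.
vault policy write demo-policy - << EOF
path "kv/data/path/to/my/secret" {
  capabilities = ["read"]
}
EOF

# Role the SecretStore below authenticates as (SA name/namespace assumed).
vault write auth/kubernetes/role/demo-role \
    bound_service_account_names=external-secrets \
    bound_service_account_namespaces=openshift-operators \
    policies=demo-policy ttl=60m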

Now with all the config out of the way, it's time to create our ExternalSecret and SecretStore. I found the templates for these objects that were packaged with the operator to be somewhat confusing, with a huge number of unnecessary fields. I had success using the manifests below:

secretstore.yaml

apiVersion: external-secrets.io/v1alpha1
kind: SecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "http://vault.vault:8200"
      path: "kv"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "demo-role"

externalsecret.yaml

apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
  name: vault-example
spec:
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: example-sync
  data:
  - secretKey: foobar
    remoteRef:
      key: path/to/my/secret
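
Both are applied like any other manifest, e.g.:

oc apply -f secretstore.yaml -f externalsecret.yaml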

With both of these applied, a new secret object should have been created, and we should be able to see that everything is synced with the vault:

$ oc get externalsecrets 
NAME            AGE
vault-backend   11h

NAME            STORE           REFRESH INTERVAL   STATUS
vault-example   vault-backend   1h                 SecretSynced

We can also take a look at our secret to make sure it is, in fact, what we put in Vault using kv earlier:

$ kubectl get secrets example-sync -o jsonpath='{.data.foobar}' | base64 -d
secretpassword

It worked!

Taking a look at our example-sync secret:

kind: Secret
apiVersion: v1
metadata:
  name: example-sync
  namespace: openshift-operators
  uid: 678b99c0-d099-45f6-b9b7-995dadfafbbc
  resourceVersion: '3626588'
  creationTimestamp: '2022-02-04T06:38:31Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"external-secrets.io/v1alpha1","kind":"ExternalSecret","metadata":{"annotations":{},"name":"vault-example","namespace":"openshift-operators"},"spec":{"data":[{"remoteRef":{"key":"path/to/my/secret","property":"password"},"secretKey":"foobar"}],"secretStoreRef":{"kind":"SecretStore","name":"vault-backend"},"target":{"name":"example-sync"}}}
    reconcile.external-secrets.io/data-hash: d28513ddb9bcb5d744845c3e88a35036
  ownerReferences:
    - apiVersion: external-secrets.io/v1alpha1
      kind: ExternalSecret
      name: vault-example
      uid: a7dab59c-42ce-4171-bbc8-771d655cf15f
      controller: true
      blockOwnerDeletion: true
  managedFields:
    - manager: external-secrets
      operation: Update
      apiVersion: v1
      time: '2022-02-04T06:38:31Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:data':
          .: {}
          'f:foobar': {}
        'f:immutable': {}
        'f:metadata':
          'f:annotations':
            .: {}
            'f:kubectl.kubernetes.io/last-applied-configuration': {}
            'f:reconcile.external-secrets.io/data-hash': {}
          'f:ownerReferences':
            .: {}
            'k:{"uid":"a7dab59c-42ce-4171-bbc8-771d655cf15f"}': {}
        'f:type': {}
immutable: false
data:
  foobar: c2VjcmV0cGFzc3dvcmQ=
type: Opaque

We can see this secret holds only an encoded field where our secret data is held; the fetching from Vault is done by external-secrets.

dystewart commented 2 years ago

Docs which come in handy for Vault-specific operations:

- Policies
- Seal/Unseal
- KV Engine
- Raft
- Kubernetes auth

anishasthana commented 2 years ago

So what are your planned next steps? Do you have a service in mind to start using Vault with (following a PR to opf/apps to get it merged and deployed)?

dystewart commented 2 years ago

Once the PR is merged the goal is actually to move all services that use KSOPS for secrets/encryption in the Smaug cluster over to Vault for secrets management.

HumairAK commented 2 years ago

for everything that's not a secret we can use patch operator :D

dystewart commented 2 years ago

Error message from the vault statefulset (operator deployment), generated from statefulset-controller 575 times in the last 2 days:

create Pod vault-0 in StatefulSet vault failed error: pods "vault-0" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted: .spec.securityContext.fsGroup: Invalid value: []int64{1000}: 1000 is not an allowed group, spec.containers[0].securityContext.capabilities.add: Invalid value: "IPC_LOCK": capability may not be added, spec.containers[0].securityContext.capabilities.add: Invalid value: "SETFCAP": capability may not be added, spec.containers[2].securityContext.privileged: Invalid value: true: Privileged containers are not allowed, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]

larsks commented 2 years ago

The resource you are trying to deploy has:

          securityContext:
            privileged: true

That means "run this pod without any constraints, providing access to all host devices and effectively make it root on the host". For obvious reasons it's not possible to request this sort of privileged access by default, but that also leads to the question: why does vault require this level of privilege?

To grant the access, you need to create a ClusterRoleBinding giving the service account the ability to use the privileged scc. That will look something like:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-allow-privileged
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:privileged
subjects:
- kind: ServiceAccount
  name: vault
  namespace:  ???

(Note that the namespace will be filled in automatically by kustomize if you're deploying with kustomize and have namespace set in your kustomization.yaml file.)
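For experimentation, there's also the imperative equivalent (assuming the vault service account lives in a namespace called vault):

oc adm policy add-scc-to-user privileged -z vault -n vault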

anishasthana commented 2 years ago

I think it's worth investigating if you can minimize the privileges.

dystewart commented 2 years ago

@larsks,

@HumairAK and I had a lengthy discussion today while doing some debugging of the vault operators I have been looking back into again, namely vault-config-operator and the banzaicloud vault operator.

The vault-config-operator offers many custom resources that we're looking for (such as kubernetes auth endpoint, vault role, vault policy, etc), but the documentation is very confusing and I haven't had any luck connecting a vault instance with the operator. It seems the learning curve for using this one is very steep and there's no custom resource associated with the operator to deploy a vault instance.

The banzaicloud vault operator is documented better, but there are a number of errors preventing the vault pods from creating, including but not limited to the vault service account needing to be listed as a user in nearly every cluster SCC. These security-context-related issues aren't something we have to deal with in the helm deployment. Additionally, there aren't any custom resources for dynamically configuring vault either.

Having said all that, Humair and I are thinking maybe it is best to roll with the helm deployment, which we at least have working (as shown above). @larsks What are your thoughts on this? We can discuss further in tomorrow morning's meeting too.

HumairAK commented 2 years ago

The operator nerc folks are trying: https://github.com/nerc-project/nerc-k8s-operators/tree/main/k8s/base/vault

dystewart commented 2 years ago

@larsks @HumairAK

So after trying to install the banzaicloud operator (via a new catalog added today: https://github.com/operate-first/apps/pull/1640/files), I'm running into more issues when actually trying to get vault deployed.

In this case the vault operator pod deploys and starts, but with this error in the logs:

leaderelection.go:325] error retrieving resource lock vault/vault-operator-lock: leases.coordination.k8s.io "vault-operator-lock" is forbidden: User "system:serviceaccount:vault:vault-operator" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "vault"

This version of the operator also allows us to create a vault CR, but this doesn't actually lead to any vault pods being spun up -- I'm guessing because of the error with the operator?
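The error message itself names the missing permission, so a namespaced Role/RoleBinding along these lines should clear it (a sketch; the resource names are assumed, and the verbs beyond get are assumptions based on what leader election typically needs):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vault-operator-leases   # assumed name
  namespace: vault
rules:
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vault-operator-leases   # assumed name
  namespace: vault
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: vault-operator-leases
subjects:
- kind: ServiceAccount
  name: vault-operator
  namespace: vault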

dystewart commented 2 years ago

PR to add vault to smaug (Helm deployment for now, we will revisit the operator at a later date): https://github.com/operate-first/apps/pull/1712/files

dystewart commented 2 years ago

And here is the PR adding the external secrets operator to smaug: https://github.com/operate-first/apps/pull/1732

dystewart commented 2 years ago

The Vault PR above has been merged but we need PVCs in the infra cluster for vault to operate. Here is the issue to track this.

HumairAK commented 2 years ago

See: https://github.com/operate-first/apps/issues/1844

Let's go with smaug for now @dystewart

dystewart commented 2 years ago

@HumairAK Should I leave the instance in infra as is or should that be removed for now?

dystewart commented 2 years ago

PR adding vault to smaug: https://github.com/operate-first/apps/pull/1954

HumairAK commented 2 years ago

Vault / ESO added to smaug, acceptance criteria for this issue updated in original post

HumairAK commented 2 years ago

task done, docs pending https://github.com/operate-first/apps/issues/1992