openbao / openbao-helm

Helm chart to install OpenBao and other associated components.
Mozilla Public License 2.0

Issue 9: address failing tests in CI by fixing more references of vault to be openbao #11

Closed · jessebot closed this 6 months ago

jessebot commented 6 months ago

Changes

Caveats

These acceptance tests can't be fully operational yet, though, as we still need working openbao-k8s and openbao-csi-provider Docker images.

jessebot commented 6 months ago

The current error we're getting in CI for this PR (after correcting `vault` to `bao`):

```
==> Logs of container openbao-zagktn68r5-server-test
------------------------------------------------------------------------------------------------------------------------
Checking for sealed info in 'bao status' output
Attempt 0...
Error checking seal status: Get "http://openbao-zagktn68r5.openbao-zagktn68r5.svc:8200/v1/sys/seal-status": dial tcp 10.96.239.182:8200: connect: connection refused

...<truncated for brevity>...

Attempt 9...
Error checking seal status: Get "http://openbao-zagktn68r5.openbao-zagktn68r5.svc:8200/v1/sys/seal-status": dial tcp 10.96.239.182:8200: connect: connection refused
timed out looking for sealed info in 'bao status' output
```
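The test's wait loop behaves roughly like this sketch. This is a hypothetical standalone version: the function name, attempt count, and address are my own, and the real logic lives in the chart's server-test template.

```shell
#!/bin/sh
# Sketch of the server-test wait loop: poll the seal-status endpoint until
# it answers, or give up after a fixed number of attempts.
wait_for_seal_status() {
  addr="$1"; attempts="${2:-10}"; delay="${3:-5}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    echo "Attempt ${i}..."
    if curl -sf "${addr}/v1/sys/seal-status" >/dev/null 2>&1; then
      echo "found sealed info in 'bao status' output"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "timed out looking for sealed info in 'bao status' output"
  return 1
}

# Demo against an address that refuses connections, with no delay,
# mimicking what CI saw above:
wait_for_seal_status "http://127.0.0.1:9" 2 0 || echo "gave up, as CI did"
```

Since the service never answers, every attempt hits "connection refused" and the loop falls through to the timeout message, which matches the CI log.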

Also wanted to note that this job says it's still running, but it actually finished after about 5 minutes, which is when the test times out: https://github.com/openbao/openbao-helm/actions/runs/9177571090/job/25235481635

Update 1

A maintainer canceled the job :)

Update 2

This job will keep failing until the openbao-k8s and openbao-csi-provider Docker images are both available, so it's safe to let this PR wait for now.

Update 3

I need to do some more local testing on this, as I'm no longer sure whether it's failing because the openbao-k8s/openbao-csi-provider Docker images aren't available, or because the service name isn't resolvable.

Sleuthing... and looking at this failed job run:

```
==> Logs of container openbao-cvjh9yybky-0
------------------------------------------------------------------------------------------------------------------------
cp: cannot stat '/openbao/config/extraconfig-from-values.hcl': No such file or directory
```

Perhaps it's failing because it wanted to copy that file here:

```
Containers:
  vault:
    Container ID:  containerd://9872ca8838fb8970726286c64f431be7e2c6c1bd04788d62f3e02d052e82e961
    Image:         quay.io/openbao/openbao:2.0.0-alpha20240329
    Image ID:      quay.io/openbao/openbao@sha256:a015ae0adb1af5b45b33632e29879ff87063d0878e9359584a50b2706e500e9a
    Ports:         8200/TCP, 8201/TCP, 8202/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Command:
      /bin/sh
      -ec
    Args:
      cp /openbao/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;
      [ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /tmp/storageconfig.hcl;
      [ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /tmp/storageconfig.hcl;
      [ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /tmp/storageconfig.hcl;
      [ -n "${API_ADDR}" ] && sed -Ei "s|API_ADDR|${API_ADDR?}|g" /tmp/storageconfig.hcl;
      [ -n "${TRANSIT_ADDR}" ] && sed -Ei "s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g" /tmp/storageconfig.hcl;
      [ -n "${RAFT_ADDR}" ] && sed -Ei "s|RAFT_ADDR|${RAFT_ADDR?}|g" /tmp/storageconfig.hcl;
      /usr/local/bin/docker-entrypoint.sh bao server -config=/tmp/storageconfig.hcl
```
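Those args implement a small copy-and-substitute step: the ConfigMap mount is read-only, so the HCL is copied to `/tmp` and placeholder tokens (`HOST_IP`, `POD_IP`, and so on) are replaced with pod-specific values from the environment. Roughly, as a sketch (`render_config` and the demo file paths are hypothetical, not part of the chart):

```shell
#!/bin/sh
# Sketch of the container's startup args: copy the ConfigMap-mounted HCL
# to a writable path, then substitute placeholder tokens from the env.
render_config() {
  src="$1"; dst="$2"
  cp "$src" "$dst" || return 1  # the step that fails in the logs above
  for var in HOST_IP POD_IP HOSTNAME API_ADDR TRANSIT_ADDR RAFT_ADDR; do
    eval "val=\${${var}:-}"
    [ -n "$val" ] && sed -i "s|${var}|${val}|g" "$dst"
  done
  return 0
}

# Demo with a throwaway file standing in for the ConfigMap mount:
printf 'api_addr = "http://POD_IP:8200"\n' > /tmp/extraconfig-demo.hcl
export POD_IP="10.0.0.5"
render_config /tmp/extraconfig-demo.hcl /tmp/storageconfig-demo.hcl
cat /tmp/storageconfig-demo.hcl  # prints: api_addr = "http://10.0.0.5:8200"
```

The `cp` is the very first command, so if the ConfigMap volume isn't mounted (or the key isn't rendered), the container dies with exactly the `cannot stat` error seen above before the server ever starts.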

which I think is declared here in the `{{ template "vault.fullname" . }}-config` ConfigMap: https://github.com/openbao/openbao-helm/blob/b59b6e55bb124e6486f861feeb15c2428096634b/charts/openbao/templates/server-config-configmap.yaml#L26-L27

Why it's not available, I'm not sure; I need to sleuth further. Locally I was able to run `kubectl create ns openbao && ct install --namespace openbao --target-branch main`, and I got the following when checking the ConfigMap with `kubectl get cm openbao-6g0tg6wa8l-config -o yaml`:
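As an aside, just the rendered HCL can be pulled out of that ConfigMap directly (the release name here is from my local run; note the escaped dot needed for a key containing `.` in kubectl's jsonpath):

```shell
kubectl get cm openbao-6g0tg6wa8l-config -n openbao \
  -o jsonpath='{.data.extraconfig-from-values\.hcl}'
```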

openbao-6g0tg6wa8l-config ConfigMap:

```yaml
apiVersion: v1
data:
  extraconfig-from-values.hcl: |2-
    disable_mlock = true
    ui = true

    listener "tcp" {
      tls_disable = 1
      address = "[::]:8200"
      cluster_address = "[::]:8201"
      # Enable unauthenticated metrics access (necessary for Prometheus Operator)
      #telemetry {
      #  unauthenticated_metrics_access = "true"
      #}
    }
    storage "file" {
      path = "/vault/data"
    }

    # Example configuration for using auto-unseal, using Google Cloud KMS. The
    # GKMS keys must already exist, and the cluster must have a service account
    # that is authorized to access GCP KMS.
    #seal "gcpckms" {
    #  project    = "vault-helm-dev"
    #  region     = "global"
    #  key_ring   = "vault-helm-unseal-kr"
    #  crypto_key = "vault-helm-unseal-key"
    #}

    # Example configuration for enabling Prometheus metrics in your config.
    #telemetry {
    #  prometheus_retention_time = "30s"
    #  disable_hostname = true
    #}
```

That `|2-` looks a bit odd, but it's valid YAML, not corruption: `|` starts a block scalar that preserves newlines, `2` is an explicit indentation indicator, and `-` strips the trailing newline. So it doesn't seem like that's what broke it. Sleuthing more...

jessebot commented 6 months ago

This should be good to go now, but as I said previously, we still need to publish the other Docker images before we can finish testing all angles of this Helm chart's default functionality.