kubeshop / testkube

☸️ Kubernetes-native testing framework for test execution and orchestration
https://testkube.io

Unable to install Testkube in OpenShift #3305

Closed: upr-kmd closed 1 year ago

upr-kmd commented 1 year ago

Describe the bug: Unable to install Testkube in OpenShift on Azure VMs. I've followed the instructions from the documentation.

To Reproduce: Try to install Testkube using the instructions from the documentation: https://kubeshop.github.io/testkube/concepts/common-issues/#installation-on-openshift

Expected behavior: Testkube is installed in OpenShift.

Version / Cluster

Screenshots

pk@X:~$ helm list -A
NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                           APP VERSION
dependency-track        cicd            4               2023-02-01 12:36:04.8333074 +0100 CET   deployed        dependency-track-1.5.5          4.6.3
prometheus-msteams      default         1               2022-11-24 13:11:30.283170519 +0000 UTC deployed        prometheus-msteams-1.3.3        v1.5.1
sonarqube               cicd            1               2023-02-01 12:44:47.1161273 +0100 CET   deployed        sonarqube-lts-1.0.31+406        8.9.10
pk@X:~$ cat values-tkmg.yaml
testkube-operator:
  webhook:
    migrate:
      enabled: true
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]

    patch:
      enabled: true
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000650000
        fsGroup: 1000650000

mongodb:
  securityContext:
    enabled: true
    fsGroup: 1000650001
    runAsUser: 1000650001
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: true
    runAsUser: 1000650001
    runAsNonRoot: true
  volumePermissions:
    enabled: false
  auth:
    enabled: false
pk@X:~$ helm install testkube kubeshop/testkube --create-namespace --namespace testkube --values values-tkmg.yaml --debug

install.go:194: [debug] Original chart version: ""
install.go:211: [debug] CHART PATH: /home/pk/.cache/helm/repository/testkube-1.9.86.tgz

client.go:133: [debug] creating 1 resource(s)
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ServiceAccount
client.go:133: [debug] creating 1 resource(s)
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ClusterRole
client.go:133: [debug] creating 1 resource(s)
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ClusterRoleBinding
client.go:133: [debug] creating 1 resource(s)
client.go:477: [debug] Starting delete for "webhook-cert-create" Job
client.go:133: [debug] creating 1 resource(s)
client.go:703: [debug] Watching for changes to Job webhook-cert-create with timeout of 5m0s
client.go:731: [debug] Add/Modify event for webhook-cert-create: ADDED
client.go:770: [debug] webhook-cert-create: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
Error: INSTALLATION FAILED: failed pre-install: timed out waiting for the condition
helm.go:84: [debug] failed pre-install: timed out waiting for the condition
INSTALLATION FAILED
main.newInstallCmd.func2
        helm.sh/helm/v3/cmd/helm/install.go:141
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.6.1/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.6.1/command.go:1044
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.6.1/command.go:968
main.main
        helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
        runtime/proc.go:250
runtime.goexit
        runtime/asm_amd64.s:1571

Additional context: First installation of Testkube; I've never used it before.

upr-kmd commented 1 year ago

It also fails when using "testkube init":

$ testkube init --values values-tkmg.yaml --verbose --no-confirm
WELCOME TO

████████ ███████ ███████ ████████ ██   ██ ██    ██ ██████  ███████
   ██    ██      ██         ██    ██  ██  ██    ██ ██   ██ ██
   ██    █████   ███████    ██    █████   ██    ██ ██████  █████
   ██    ██           ██    ██    ██  ██  ██    ██ ██   ██ ██
   ██    ███████ ███████    ██    ██   ██  ██████  ██████  ███████
                                           /tɛst kjub/ by Kubeshop

✔ loading config file

Helm installing testkube framework
✔ updating helm repositories

Installing testkube (error: process error: exit status 1
output: Release "testkube" does not exist. Installing it now.
Error: failed pre-install: timed out waiting for the condition
)

pk@X:~$ helm list -A
NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                           APP VERSION
dependency-track        cicd            4               2023-02-01 12:36:04.8333074 +0100 CET   deployed        dependency-track-1.5.5          4.6.3
prometheus-msteams      default         1               2022-11-24 13:11:30.283170519 +0000 UTC deployed        prometheus-msteams-1.3.3        v1.5.1
sonarqube               cicd            1               2023-02-01 12:44:47.1161273 +0100 CET   deployed        sonarqube-lts-1.0.31+406        8.9.10
testkube                testkube        1               2023-03-01 19:28:00.949623169 +0100 CET failed          testkube-1.9.88
pk@X:~$ helm -n testkube status testkube
NAME: testkube
LAST DEPLOYED: Wed Mar  1 19:28:00 2023
NAMESPACE: testkube
STATUS: failed
REVISION: 1
NOTES:
Enjoy testing with Testkube!

ypoplavs commented 1 year ago

Hello @upr-kmd, on what platform are you deploying Testkube? Is it an on-premise cluster or a cloud provider?

Could you please list the jobs while the install operation is in progress and check them for errors? It may require securityContext changes.
`kubectl get jobs -n testkube`
`kubectl describe job webhook-cert-create -n testkube`

upr-kmd commented 1 year ago

Our OpenShift is installed on Azure VMs.

$ kubectl get jobs -n testkube
NAME                  COMPLETIONS   DURATION   AGE
webhook-cert-create   0/1           10h        10h

$ kubectl describe job webhook-cert-create -n testkube

Pod Template:
  Labels:           app.kubernetes.io/component=admission-webhook
                    app.kubernetes.io/instance=testkube
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=testkube-operator
                    app.kubernetes.io/version=1.9.3
                    controller-uid=826901a7-6daa-44e7-85f2-afed8624625e
                    helm.sh/chart=testkube-operator-1.9.3
                    job-name=webhook-cert-create
  Service Account:  testkube-operator-webhook-cert-mgr
  Init Containers:
   migrate:
    Image:      docker.io/rancher/kubectl:v1.23.7
    Port:       <none>
    Host Port:  <none>
    Args:
      delete
      secret
      webhook-server-cert
      --namespace
      testkube
      --ignore-not-found
    Environment:  <none>
    Mounts:       <none>
  Containers:
   create:
    Image:      docker.io/dpejcev/kube-webhook-certgen:1.0.11
    Port:       <none>
    Host Port:  <none>
    Args:
      create
      --host
      testkube-operator-webhook-service.testkube,testkube-operator-webhook-service.testkube.svc
      --namespace
      testkube
      --secret-name
      webhook-server-cert
      --key-name
      tls.key
      --cert-name
      tls.crt
      --ca-name
      ca.crt
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type     Reason        Age                    From            Message
  ----     ------        ----                   ----            -------
  Warning  FailedCreate  3m47s (x171 over 10h)  job-controller  Error creating: pods "webhook-cert-create-" is forbidden: unable to validate against any security context constraint: [provider "sonarqube-privileged-scc": Forbidden: not usable by user or serviceaccount, provider "anyuid": Forbidden: not usable by user or serviceaccount, provider "pipelines-scc": Forbidden: not usable by user or serviceaccount, provider "stackrox-admission-control": Forbidden: not usable by user or serviceaccount, provider "stackrox-sensor": Forbidden: not usable by user or serviceaccount, provider "redis-enterprise-scc": Forbidden: not usable by user or serviceaccount, provider restricted-v2: .spec.securityContext.fsGroup: Invalid value: []int64{1000650000}: 1000650000 is not an allowed group, spec.initContainers[0].securityContext.runAsUser: Invalid value: 1000650000: must be in the ranges: [1000980000, 1000989999], spec.containers[0].securityContext.runAsUser: Invalid value: 1000650000: must be in the ranges: [1000980000, 1000989999], provider "restricted": Forbidden: not usable by user or serviceaccount, provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "noobaa": Forbidden: not usable by user or serviceaccount, provider "noobaa-endpoint": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "ocs-metrics-exporter": Forbidden: not usable by user or serviceaccount, provider "stackrox-collector": Forbidden: not usable by user or serviceaccount, provider "rook-ceph": Forbidden: not usable by user 
or serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "rook-ceph-csi": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]
ypoplavs commented 1 year ago

Apparently, each cloud provider requires a different security context constraint. Our example was tested on GCP (I'll update the documentation). Please try setting a value in the range [1000980000, 1000989999] for the migrate and patch pods.
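For context: OpenShift records the UID range a namespace is allowed to use in the `openshift.io/sa.scc.uid-range` annotation (format `<start>/<size>`), and the `restricted`/`restricted-v2` SCCs validate `runAsUser`/`fsGroup` against it. A small sketch that reads the annotation and computes the allowed interval, so you can pick a valid UID before installing (the namespace name is this issue's, the helper function is mine):

```shell
# Compute the allowed UID interval from an OpenShift namespace's
# openshift.io/sa.scc.uid-range annotation (format: <start>/<size>).
uid_range_bounds() {
  local range=$1 start size
  start=${range%/*}          # UID the range starts at
  size=${range#*/}           # number of UIDs in the range
  echo "${start}-$((start + size - 1))"
}

# On a live cluster you would fetch the annotation like this:
#   oc get namespace testkube \
#     -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}'
# For the value implied by the error above:
uid_range_bounds "1000980000/10000"   # prints 1000980000-1000989999
```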


upr-kmd commented 1 year ago

Hi @ypoplavs. I've tried your workaround but it didn't work. The range of allowed user and group IDs changes, and the rest of the errors stay the same.

#2nd try
pk@X:~$ cat values-azure.yaml
testkube-operator:
  webhook:
    migrate:
      enabled: true
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]

    patch:
      enabled: true
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000980000
        fsGroup: 1000980000

mongodb:
  securityContext:
    enabled: true
    fsGroup: 1000980001
    runAsUser: 1000980001
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: true
    runAsUser: 1000980001
    runAsNonRoot: true
  volumePermissions:
    enabled: false
  auth:
    enabled: false
pk@X:~$ helm install testkube kubeshop/testkube --create-namespace --namespace testkube --values values-azure.yaml --debug
install.go:194: [debug] Original chart version: ""
install.go:211: [debug] CHART PATH: /home/pk/.cache/helm/repository/testkube-1.9.88.tgz

client.go:133: [debug] creating 1 resource(s)
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ServiceAccount
client.go:481: [debug] Ignoring delete failure for "testkube-operator-webhook-cert-mgr" /v1, Kind=ServiceAccount: serviceaccounts "testkube-operator-webhook-cert-mgr" not found
client.go:133: [debug] creating 1 resource(s)
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ClusterRole
client.go:133: [debug] creating 1 resource(s)
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ClusterRoleBinding
client.go:133: [debug] creating 1 resource(s)
client.go:477: [debug] Starting delete for "webhook-cert-create" Job
client.go:481: [debug] Ignoring delete failure for "webhook-cert-create" batch/v1, Kind=Job: jobs.batch "webhook-cert-create" not found
client.go:133: [debug] creating 1 resource(s)
client.go:703: [debug] Watching for changes to Job webhook-cert-create with timeout of 5m0s
client.go:731: [debug] Add/Modify event for webhook-cert-create: ADDED
client.go:770: [debug] webhook-cert-create: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:731: [debug] Add/Modify event for webhook-cert-create: MODIFIED
client.go:770: [debug] webhook-cert-create: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
Error: INSTALLATION FAILED: failed pre-install: timed out waiting for the condition
helm.go:84: [debug] failed pre-install: timed out waiting for the condition
INSTALLATION FAILED
main.newInstallCmd.func2
        helm.sh/helm/v3/cmd/helm/install.go:141
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.6.1/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.6.1/command.go:1044
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.6.1/command.go:968
main.main
        helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
        runtime/proc.go:250
runtime.goexit
        runtime/asm_amd64.s:1571
pk@X:~$ kubectl describe job webhook-cert-create -n testkube
(...)
Events:
  Type     Reason        Age                 From            Message
  ----     ------        ----                ----            -------
  Warning  FailedCreate  55s (x4 over 105s)  job-controller  Error creating: pods "webhook-cert-create-" is forbidden: unable to validate against any security context constraint: [
provider "sonarqube-privileged-scc": Forbidden: not usable by user or serviceaccount, 
provider "anyuid": Forbidden: not usable by user or serviceaccount, 
provider "pipelines-scc": Forbidden: not usable by user or serviceaccount, 
provider "stackrox-admission-control": Forbidden: not usable by user or serviceaccount, 
provider "stackrox-sensor": Forbidden: not usable by user or serviceaccount, 
provider "redis-enterprise-scc": Forbidden: not usable by user or serviceaccount, 
provider restricted-v2: .spec.securityContext.fsGroup: 
Invalid value: []int64{1000980000}: 1000980000 is not an allowed group, spec.initContainers[0].securityContext.runAsUser: 
Invalid value: 1000980000: must be in the ranges: [1000950000, 1000959999], spec.containers[0].securityContext.runAsUser: 
Invalid value: 1000980000: must be in the ranges: [1000950000, 1000959999], 
provider "restricted": Forbidden: not usable by user or serviceaccount, 
provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, 
provider "nonroot": Forbidden: not usable by user or serviceaccount, 
provider "noobaa": Forbidden: not usable by user or serviceaccount, 
provider "noobaa-endpoint": Forbidden: not usable by user or serviceaccount, 
provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, 
provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, 
provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount, 
provider "hostnetwork": Forbidden: not usable by user or serviceaccount, 
provider "hostaccess": Forbidden: not usable by user or serviceaccount, 
provider "ocs-metrics-exporter": Forbidden: not usable by user or serviceaccount, 
provider "stackrox-collector": Forbidden: not usable by user or serviceaccount, 
provider "rook-ceph": Forbidden: not usable by user or serviceaccount, 
provider "node-exporter": Forbidden: not usable by user or serviceaccount, 
provider "rook-ceph-csi": Forbidden: not usable by user or serviceaccount, 
provider "privileged": Forbidden: not usable by user or serviceaccount]

#3rd try
$ cat values-azure2.yaml
testkube-operator:
  webhook:
    migrate:
      enabled: true
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]

    patch:
      enabled: true
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000950000
        fsGroup: 1000950000

mongodb:
  securityContext:
    enabled: true
    fsGroup: 1000950001
    runAsUser: 1000950001
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: true
    runAsUser: 1000950001
    runAsNonRoot: true
  volumePermissions:
    enabled: false
  auth:
    enabled: false

$ kubectl describe job webhook-cert-create -n testkube
(...)
Events:
  Type     Reason        Age                From            Message
  ----     ------        ----               ----            -------
  Warning  FailedCreate  11s (x3 over 21s)  job-controller  Error creating: pods "webhook-cert-create-" is forbidden: unable to validate against any security context constraint: [provider "sonarqube-privileged-scc": Forbidden: not usable by user or serviceaccount, provider "anyuid": Forbidden: not usable by user or serviceaccount, provider "pipelines-scc": Forbidden: not usable by user or serviceaccount, provider "stackrox-admission-control": Forbidden: not usable by user or serviceaccount, provider "stackrox-sensor": Forbidden: not usable by user or serviceaccount, provider "redis-enterprise-scc": Forbidden: not usable by user or serviceaccount, provider restricted-v2: .spec.securityContext.fsGroup: Invalid value: []int64{1000950000}: 1000950000 is not an allowed group, spec.initContainers[0].securityContext.runAsUser: Invalid value: 1000950000: must be in the ranges: [1000960000, 1000969999], spec.containers[0].securityContext.runAsUser: Invalid value: 1000950000: must be in the ranges: [1000960000, 1000969999], provider "restricted": Forbidden: not usable by user or serviceaccount, provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "noobaa": Forbidden: not usable by user or serviceaccount, provider "noobaa-endpoint": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "ocs-metrics-exporter": Forbidden: not usable by user or serviceaccount, provider "stackrox-collector": Forbidden: not usable by user or serviceaccount, provider "rook-ceph": Forbidden: not usable by user or 
serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "rook-ceph-csi": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]

#4th try
$ cat values-azure3.yaml
testkube-operator:
  webhook:
    migrate:
      enabled: true
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]

    patch:
      enabled: true
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000960000
        fsGroup: 1000960000

mongodb:
  securityContext:
    enabled: true
    fsGroup: 1000650001
    runAsUser: 1000650001
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: true
    runAsUser: 1000650001
    runAsNonRoot: true
  volumePermissions:
    enabled: false
  auth:
    enabled: false

$ kubectl describe job webhook-cert-create -n testkube
(...)
Events:
  Type     Reason        Age              From            Message
  ----     ------        ----             ----            -------
  Warning  FailedCreate  5s (x2 over 5s)  job-controller  Error creating: pods "webhook-cert-create-" is forbidden: unable to validate against any security context constraint: [provider "sonarqube-privileged-scc": Forbidden: not usable by user or serviceaccount, provider "anyuid": Forbidden: not usable by user or serviceaccount, provider "pipelines-scc": Forbidden: not usable by user or serviceaccount, provider "stackrox-admission-control": Forbidden: not usable by user or serviceaccount, provider "stackrox-sensor": Forbidden: not usable by user or serviceaccount, provider "redis-enterprise-scc": Forbidden: not usable by user or serviceaccount, provider restricted-v2: .spec.securityContext.fsGroup: Invalid value: []int64{1000960000}: 1000960000 is not an allowed group, spec.initContainers[0].securityContext.runAsUser: Invalid value: 1000960000: must be in the ranges: [1000970000, 1000979999], spec.containers[0].securityContext.runAsUser: Invalid value: 1000960000: must be in the ranges: [1000970000, 1000979999], provider "restricted": Forbidden: not usable by user or serviceaccount, provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "noobaa": Forbidden: not usable by user or serviceaccount, provider "noobaa-endpoint": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "ocs-metrics-exporter": Forbidden: not usable by user or serviceaccount, provider "stackrox-collector": Forbidden: not usable by user or serviceaccount, provider "rook-ceph": Forbidden: not usable by user or 
serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "rook-ceph-csi": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]

It looks to me like it's a problem with the SCC.

$ oc -n testkube get job webhook-cert-create -o yaml | grep scc
(empty output)

$ oc -n testkube get job webhook-cert-create -o yaml | oc adm policy scc-subject-review -f -
RESOURCE                  ALLOWED BY
Job/webhook-cert-create   sonarqube-privileged-scc
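Each failed attempt above reports a different allowed interval, which is consistent with OpenShift assigning a fresh per-namespace UID range (the `openshift.io/sa.scc.uid-range` annotation) every time the namespace is recreated. Rather than chasing that moving target, one option worth trying is to omit `runAsUser`/`fsGroup` entirely and let the `restricted-v2` SCC inject a valid UID at admission time. A sketch of such a values file, assuming the chart passes these security contexts through to the pods unchanged:

```yaml
# Sketch: leave runAsUser/fsGroup unset so OpenShift's restricted-v2 SCC
# can assign a UID from the namespace's allowed range at admission time.
testkube-operator:
  webhook:
    migrate:
      enabled: true
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
    patch:
      enabled: true
      securityContext:
        runAsNonRoot: true
        # no runAsUser / fsGroup: the SCC supplies a valid UID

mongodb:
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: true
    runAsNonRoot: true
    # no runAsUser: the SCC supplies a valid UID
  volumePermissions:
    enabled: false
  auth:
    enabled: false
```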
upr-kmd commented 1 year ago

I've managed to install Testkube using a workaround suggested by one of our developers. It's probably not the most secure option. Testkube installed, but it doesn't fully start.

The workaround was:

pk@X:~$ oc create namespace testkube
namespace/testkube created
pk@X:~$ cat testkube-scc2.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: testkube-operator-webhook-cert-mgr
  namespace: testkube
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testkube-use-privileged-scc
  namespace: testkube
rules:
  - apiGroups: ["security.openshift.io"]
    resources: ["securitycontextconstraints"]
    resourceNames: ["privileged"]
    verbs: ["use"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testkube-use-privileged-scc
  namespace: testkube
subjects:
  - kind: ServiceAccount
    name: testkube-operator-webhook-cert-mgr
roleRef:
  kind: Role
  name: testkube-use-privileged-scc
  apiGroup: rbac.authorization.k8s.io
pk@X:~$ oc apply -f testkube-scc2.yaml
serviceaccount/testkube-operator-webhook-cert-mgr created
role.rbac.authorization.k8s.io/testkube-use-privileged-scc created
rolebinding.rbac.authorization.k8s.io/testkube-use-privileged-scc created
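If granting `privileged` feels too broad, a narrower variant of the same RBAC workaround may work: bind the job's service account to the built-in `nonroot-v2` SCC instead, since the cert-gen containers only need to run as an arbitrary non-root UID. A sketch using the same service account name as above (`nonroot-v2` appears in the SCC list in the errors, so it is assumed available on this cluster version):

```yaml
# Sketch: allow the webhook cert-manager service account to use the
# nonroot-v2 SCC instead of privileged (least-privilege variant).
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testkube-use-nonroot-scc
  namespace: testkube
rules:
  - apiGroups: ["security.openshift.io"]
    resources: ["securitycontextconstraints"]
    resourceNames: ["nonroot-v2"]
    verbs: ["use"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testkube-use-nonroot-scc
  namespace: testkube
subjects:
  - kind: ServiceAccount
    name: testkube-operator-webhook-cert-mgr
roleRef:
  kind: Role
  name: testkube-use-nonroot-scc
  apiGroup: rbac.authorization.k8s.io
```

Note that `nonroot-v2` requires the pod to declare a non-root user (or the image to set one), so keep `runAsNonRoot: true` in the chart values.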

Installation logs:

pk@X:~$ helm install testkube kubeshop/testkube --create-namespace --namespace testkube --values values-azure3.yaml --debug
install.go:194: [debug] Original chart version: ""
install.go:211: [debug] CHART PATH: /home/pk/.cache/helm/repository/testkube-1.9.88.tgz

client.go:133: [debug] creating 1 resource(s)
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ServiceAccount
client.go:133: [debug] creating 1 resource(s)
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ClusterRole
client.go:133: [debug] creating 1 resource(s)
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ClusterRoleBinding
client.go:133: [debug] creating 1 resource(s)
client.go:477: [debug] Starting delete for "webhook-cert-create" Job
client.go:481: [debug] Ignoring delete failure for "webhook-cert-create" batch/v1, Kind=Job: jobs.batch "webhook-cert-create" not found
client.go:133: [debug] creating 1 resource(s)
client.go:703: [debug] Watching for changes to Job webhook-cert-create with timeout of 5m0s
client.go:731: [debug] Add/Modify event for webhook-cert-create: ADDED
client.go:770: [debug] webhook-cert-create: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:731: [debug] Add/Modify event for webhook-cert-create: MODIFIED
client.go:770: [debug] webhook-cert-create: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
client.go:731: [debug] Add/Modify event for webhook-cert-create: MODIFIED
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ServiceAccount
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ClusterRole
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ClusterRoleBinding
client.go:477: [debug] Starting delete for "webhook-cert-create" Job
client.go:133: [debug] creating 66 resource(s)
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ServiceAccount
client.go:481: [debug] Ignoring delete failure for "testkube-operator-webhook-cert-mgr" /v1, Kind=ServiceAccount: serviceaccounts "testkube-operator-webhook-cert-mgr" not found
client.go:133: [debug] creating 1 resource(s)
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ClusterRole
client.go:481: [debug] Ignoring delete failure for "testkube-operator-webhook-cert-mgr" rbac.authorization.k8s.io/v1, Kind=ClusterRole: clusterroles.rbac.authorization.k8s.io "testkube-operator-webhook-cert-mgr" not found
client.go:133: [debug] creating 1 resource(s)
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ClusterRoleBinding
client.go:481: [debug] Ignoring delete failure for "testkube-operator-webhook-cert-mgr" rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding: clusterrolebindings.rbac.authorization.k8s.io "testkube-operator-webhook-cert-mgr" not found
client.go:133: [debug] creating 1 resource(s)
client.go:477: [debug] Starting delete for "webhook-cert-patch" Job
client.go:481: [debug] Ignoring delete failure for "webhook-cert-patch" batch/v1, Kind=Job: jobs.batch "webhook-cert-patch" not found
client.go:133: [debug] creating 1 resource(s)
client.go:703: [debug] Watching for changes to Job webhook-cert-patch with timeout of 5m0s
client.go:731: [debug] Add/Modify event for webhook-cert-patch: ADDED
client.go:770: [debug] webhook-cert-patch: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:731: [debug] Add/Modify event for webhook-cert-patch: MODIFIED
client.go:770: [debug] webhook-cert-patch: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
client.go:731: [debug] Add/Modify event for webhook-cert-patch: MODIFIED
client.go:770: [debug] webhook-cert-patch: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
client.go:731: [debug] Add/Modify event for webhook-cert-patch: MODIFIED
client.go:770: [debug] webhook-cert-patch: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
client.go:731: [debug] Add/Modify event for webhook-cert-patch: MODIFIED
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ServiceAccount
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ClusterRole
client.go:477: [debug] Starting delete for "testkube-operator-webhook-cert-mgr" ClusterRoleBinding
client.go:477: [debug] Starting delete for "webhook-cert-patch" Job
NAME: testkube
LAST DEPLOYED: Mon Mar  6 18:55:12 2023
NAMESPACE: testkube
STATUS: deployed
REVISION: 1
USER-SUPPLIED VALUES:
mongodb:
  auth:
    enabled: false
  containerSecurityContext:
    enabled: true
    runAsNonRoot: true
    runAsUser: 1000650001
  podSecurityContext:
    enabled: false
  securityContext:
    enabled: true
    fsGroup: 1000650001
    runAsUser: 1000650001
  volumePermissions:
    enabled: false
testkube-operator:
  webhook:
    migrate:
      enabled: true
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
          - ALL
    patch:
      enabled: true
      securityContext:
        fsGroup: 1000960000
        runAsNonRoot: true
        runAsUser: 1000960000

COMPUTED VALUES:

(...)
mongodb:
  affinity: {}
  annotations: {}
  arbiter:
    affinity: {}
    annotations: {}
    args: []
    command: []
    configuration: ""
    containerPorts:
      mongodb: 27017
    containerSecurityContext:
      enabled: true
      runAsNonRoot: true
      runAsUser: 1001
    customLivenessProbe: {}
    customReadinessProbe: {}
    customStartupProbe: {}
    enabled: true
    existingConfigmap: ""
    extraEnvVars: []
    extraEnvVarsCM: ""
    extraEnvVarsSecret: ""
    extraFlags: []
    extraVolumeMounts: []
    extraVolumes: []
    hostAliases: []
    initContainers: []
    labels: {}
    lifecycleHooks: {}
    livenessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 30
      periodSeconds: 20
      successThreshold: 1
      timeoutSeconds: 10
    nodeAffinityPreset:
      key: ""
      type: ""
      values: []
    nodeSelector: {}
    pdb:
      create: false
    (...)
  customLivenessProbe: {}
  customReadinessProbe: {}
  customStartupProbe: {}
  diagnosticMode:
    args:
    - infinity
    command:
    - sleep
    enabled: false
  directoryPerDB: false
  disableJavascript: false
  disableSystemLog: false
  enableIPv6: false
  enableJournal: true
  enabled: true
  existingConfigmap: ""
  externalAccess:
    autoDiscovery:
      enabled: false
      image:
        pullPolicy: IfNotPresent
        pullSecrets: []
        registry: docker.io
        repository: bitnami/kubectl
        tag: 1.24.3-debian-11-r7
      resources:
        limits: {}
        requests: {}
    enabled: false
    hidden:
      enabled: false
      service:
        annotations: {}
        domain: ""
        externalTrafficPolicy: Local
        extraPorts: []
        loadBalancerIPs: []
        loadBalancerSourceRanges: []
        nodePorts: []
        portName: mongodb
        ports:
          mongodb: 27017
        sessionAffinity: None
        sessionAffinityConfig: {}
        type: LoadBalancer
    service:
      annotations: {}
      domain: ""
      externalTrafficPolicy: Local
      extraPorts: []
      loadBalancerIPs: []
      loadBalancerSourceRanges: []
      nodePorts: []
        portName: mongodb
(...)
    schedulerName: ""
    service:
      annotations: {}
      extraPorts: []
      portName: mongodb
      ports:
        mongodb: 27017
    sidecars: []
    startupProbe:
      enabled: false
      failureThreshold: 30
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    terminationGracePeriodSeconds: ""
    tolerations: []
    topologySpreadConstraints: []
    updateStrategy:
      type: RollingUpdate
  hostAliases: []
  image:
    debug: false
    pullPolicy: IfNotPresent
    pullSecrets: []
    registry: docker.io
    repository: zcube/bitnami-compat-mongodb
    tag: 5.0.10-debian-11-r19
  initContainers: []
  initdbScripts: {}
  initdbScriptsConfigMap: ""
  kubeVersion: ""
  labels: {}
  lifecycleHooks: {}
  livenessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 30
    periodSeconds: 20
    successThreshold: 1
    timeoutSeconds: 10
  metrics:
    args: []
    command: []
    containerPort: 9216
    customLivenessProbe: {}
    customReadinessProbe: {}
    customStartupProbe: {}
    enabled: false
    extraFlags: ""
    image:
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: docker.io
      repository: bitnami/mongodb-exporter
      tag: 0.33.0-debian-11-r9
    livenessProbe:
      enabled
    loadBalancerIP: ""
    loadBalancerSourceRanges: []
    nameOverride: ""
    nodePort: true
    nodePorts:
      mongodb: ""
    port: "27017"
    portName: mongodb
    ports:
      mongodb: 27017
    sessionAffinity: None
    sessionAffinityConfig: {}
    type: ClusterIP
  serviceAccount:
    annotations: {}
    automountServiceAccountToken: true
    create: true
    name: ""
  sidecars: []
  startupProbe:
    enabled: false
    failureThreshold: 30
    initialDelaySeconds: 5
    periodSeconds: 20
    successThreshold: 1
    timeoutSeconds: 10
  systemLogVerbosity: 0
  terminationGracePeriodSeconds: ""
  tls:
    autoGenerated: true
    caCert: ""
    caKey: ""
    enabled: false
    existingSecret: ""
    extraDnsNames: []
    image:
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: docker.io
      repository: bitnami/nginx
      tag: 1.23.1-debian-11-r4
    mode: requireTLS
    resources:
      limits: {}
      requests: {}
  tolerations:
  - effect: NoSchedule
    key: kubernetes.io/arch
    operator: Equal
    value: arm64
  topologySpreadConstraints: []
  updateStrategy:
    type: RollingUpdate
  useStatefulSet: false
  volumePermissions:
    enabled: false
    image:
      pullPolicy: IfNot
        timeoutSeconds: 5
    hostNetwork: false
    image:
      pullPolicy: IfNotPresent
      repository: nats
      tag: 2.9.8-alpine
    jetstream:
      enabled: false
      fileStorage:
        accessModes:
        - ReadWriteOnce
        enabled: true
        size: 10Gi
        storageDirectory: /data
      memStorage:
        enabled: true
        size: 1Gi
    limits:
      lameDuckDuration: 30s
      lameDuckGracePeriod: 10s
    logging: {}
    profiling:
      enabled: false
      port: 6000
    resources: {}
    securityContext: {}
    selectorLabels: {}
    serverNamePrefix: ""
    serviceAccount:
      annotations: {}
      create: true
      name: ""
    terminationGracePeriodSeconds: 60
  natsbox:
    additionalLabels: {}
    affinity: {}
    enabled: true
    extraVolumeMounts: []
    extraVolumes: []
    image:
      pullPolicy: IfNotPresent
      repository: natsio/nats-box
      tag: 0.13.2
    imagePullSecrets: []
    nodeSelector: {}
    podAnnotations: {}
    podLabels: {}
    securityContext: {}
    tolerations:
    - effect: NoSchedule
      key: kubernetes.io/arch
      operator: Equal
      value: arm64
  networkPolicy:
    allowExternal: true
    enabled: false
    extraEgress: []
    extraIngress:
    serviceAccountName: ""
    storage: 10Gi
    tolerations:
    - effect: NoSchedule
      key: kubernetes.io/arch
      operator: Equal
      value: arm64
  mongodb:
    allowDiskUse: true
    dsn: mongodb://testkube-mongodb:27017
  nameOverride: api-server
  nats:
    enabled: true
    uri: nats://testkube-nats
  nodeSelector: {}
  podAnnotations: {}
  podLabels: {}
  podSecurityContext: {}
  prometheus:
    enabled: false
    interval: 15s
    monitoringLabels: {}
  rbac:
    create: true
    createRoleBindings: true
    createRoles: true
  replicaCount: 1
  resources:
    requests:
      cpu: 200m
      memory: 200Mi
  securityContext: {}
  service:
    annotations: {}
    labels: {}
    port: 8088
    type: ClusterIP
  serviceAccount:
    annotations: {}
    create: true
    name: ""
  slackConfig: ""
  slackSecret: ""
  slackTemplate: ""
  slackToken: ""
  storage:
    SSL: false
    accessKey: minio123
    accessKeyId: minio
    bucket: testkube-artifacts
    endpoint: ""
    endpoint_port: "9000"
    location: ""
    scrapperEnabled: true
    token: ""
  testConnection:
    enabled: true
    re
    key: kubernetes.io/arch
    operator: Equal
    value: arm64
testkube-operator:
  affinity: {}
  apiFullname: testkube-api-server
  apiPort: 8088
  containerSecurityContext:
    allowPrivilegeEscalation: false
  extraEnvVars: []
  fullnameOverride: testkube-operator
  global:
    annotations: {}
    exampleValue: global-chart
    global:
      annotations: {}
      exampleValue: global-chart
      global:
        annotations: {}
        exampleValue: global-chart
        global:
          annotations: {}
          imagePullSecrets: []
          imageRegistry: ""
          labels: {}
        imagePullSecrets: []
        imageRegistry: ""
        labels: {}
      imagePullSecrets: []
      imageRegistry: ""
      labels: {}
    imagePullSecrets: []
    imageRegistry: ""
    labels: {}
  image:
    pullPolicy: ""
    registry: docker.io
    repository: kubeshop/testkube-operator
  initialDelaySeconds: 15
  installCRD: true
  kubeVersion: ""
  livenessProbePort: 8081
  metricsServiceName: ""
  nameOverride: testkube-operator
  nodeSelector: {}
  periodSeconds: 20
  proxy:
    image:
      pullPolicy: Always
      registry: gcr.io
      repository: kubebuilder/kube-rbac-proxy
      tag: v0.8.0
    resources: {}
  rbac:
    create: true
  readinessProbePort: 8081
  readinessProbeinitialDelaySeconds: 5
  readinessProbeperiodSec
apiVersion: v1
kind: ServiceAccount
metadata:
  name: "mongodb-upgrade"
  labels:
    app.kubernetes.io/component: mongodb
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: testkube
    app.kubernetes.io/name: mongodb-upgrade
  annotations:
    "helm.sh/hook": pre-upgrade,post-upgrade
    "helm.sh/hook-weight": "4"
    "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
---
# Source: testkube/charts/testkube-operator/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
  labels:
    app.kubernetes.io/version: "1.9.3"
    helm.sh/chart: testkube-operator-1.9.3
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: testkube-operator
    app.kubernetes.io/instance: testkube
    app.kubernetes.io/component: admission-webhook
  name: testkube-operator-webhook-cert-mgr
  namespace: testkube
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - create
  - delete
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - mutatingwebhookconfigurations
  - validatingwebhookconfigurations
  verbs:
  - get
  - update
- apiGroups:
  - "apiextensions.k8s.io"
  resources:
  - customresourcedefinitions
  verbs:
  - get
  - list
  - update
---
# Source: testkube/charts/testku
    - |
      "&&"
    - |
      name=$(nats request -s nats://$NATS_HOST:4222 name.test '' 2>/dev/null)
    - |
      "&&"
    - |
      [ $name = test ]

  restartPolicy: Never
---
# Source: testkube/charts/testkube-api/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "testkube-api-server-test-connection"
  labels:
    helm.sh/chart: testkube-api-1.9.22
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['testkube-api-server:8088']
  restartPolicy: Never
  tolerations:
    - effect: NoSchedule
      key: kubernetes.io/arch
      operator: Equal
      value: arm64
---
# Source: testkube/charts/testkube-dashboard/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "testkube-dashboard-test-connection"
  labels:
    helm.sh/chart: testkube-dashboard-1.9.2-beta1
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['testkube-dashboard:8080']
  restartPolicy: Never
  tolerations:
    - effect: NoSchedule
      key: kubernetes.io/arch
      operator: Equal
      value: arm64
---
# Source: testkube/charts/testkube-operator/templates/tests/test-conn
            - --namespace
            - testkube
            - --secret-name
            - webhook-server-cert
            - --key-name
            - tls.key
            - --cert-name
            - tls.crt
            - --ca-name
            - ca.crt
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
      restartPolicy: OnFailure
      serviceAccountName: testkube-operator-webhook-cert-mgr
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
        - effect: NoSchedule
          key: kubernetes.io/arch
          operator: Equal
          value: arm64
      securityContext:
        fsGroup: 1000960000
        runAsNonRoot: true
        runAsUser: 1000960000
  backoffLimit: 1
---
# Source: testkube/charts/testkube-operator/templates/webhook-cert-patch.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: webhook-cert-patch
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
  labels:
    app.kubernetes.io/version: "1.9.3"
    helm.sh/chart: testkube-operator-1.9.3
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: testkube-operator
    app.kubernetes.io/instance: testkube
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      annotations:
      labels:
        app.kubernetes.io/version: "1.9.3"
        helm.sh/chart: testkube-operator-1.9.3
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: testkube-operator
        app.kubernetes.io/instance: testkube
        app.kubernetes.io/component: admission-webhook
    spec:
      cont
          operator: Equal
          value: arm64
MANIFEST:
---
# Source: testkube/charts/nats/templates/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: testkube-nats
  namespace: testkube
  labels:
    helm.sh/chart: nats-0.19.1
    app.kubernetes.io/name: nats
    app.kubernetes.io/instance: testkube
    app.kubernetes.io/version: "2.9.8"
    app.kubernetes.io/managed-by: Helm
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nats
      app.kubernetes.io/instance: testkube
---
# Source: testkube/charts/mongodb/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: testkube-mongodb
  namespace: "testkube"
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-12.1.31
    app.kubernetes.io/instance: testkube
    app.kubernetes.io/managed-by: Helm
secrets:
  - name: testkube-mongodb
automountServiceAccountToken: true
---
# Source: testkube/charts/nats/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: testkube-nats
  namespace: testkube
  labels:
    helm.sh/chart: nats-0.19.1
    app.kubernetes.io/name: nats
    app.kubernetes.io/instance: testkube
    app.kubernetes.io/version: "2.9.8"
    app.kubernetes.io/managed-by: Helm
---
# Source: testkube/charts/testkube-api/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: testkube-api-server
  labels:
    helm.sh/chart: testkube-a
    app.kubernetes.io/name: nats
    app.kubernetes.io/instance: testkube
    app.kubernetes.io/version: "2.9.8"
    app.kubernetes.io/managed-by: Helm
data:
  nats.conf: |
    # NATS Clients Port
    port: 4222

    # PID file shared with configuration reloader.
    pid_file: "/var/run/nats/nats.pid"

    ###############
    #             #
    # Monitoring  #
    #             #
    ###############
    http: 8222
    server_name:$POD_NAME
    lame_duck_grace_period: 10s
    lame_duck_duration: 30s
---
# Source: testkube/charts/testkube-operator/templates/configmap.yaml
apiVersion: v1
data:
  controller_manager_config.yaml: |
    apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
    kind: ControllerManagerConfig
    health:
      healthProbeBindAddress: :8081
    metrics:
      bindAddress: 127.0.0.1:8080
    webhook:
      port: 9443
    leaderElection:
      leaderElect: true
      resourceName: 47f0dfc1.testkube.io
kind: ConfigMap
metadata:
  name: testkube-operator-manager-config
  namespace: testkube
---
# Source: testkube/charts/mongodb/templates/standalone/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: testkube-mongodb
  namespace: "testkube"
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-12.1.31
    app.kubernetes.io/instance: testkube
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongodb
spec:
  accessModes:
    - "ReadWriteOnce"
  r
            kind:
              description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
              type: string
            metadata:
              type: object
            spec:
              description: ExecutorSpec defines the desired state of Executor
              properties:
                args:
                  description: container executor binary arguments
                  items:
                    type: string
                  type: array
                command:
                  description: container executor default binary command
                  items:
                    type: string
                  type: array
                content_types:
                  description: ContentTypes list of handled content types
                  items:
                    enum:
                      - string
                      - file-uri
                      - git-file
                      - git-dir
                    type: string
                  type: array
                executor_type:
                  description: ExecutorType one of "rest" for rest openapi based executors
                    or "job" which will be default runners for testkube or "container"
                    for container executors
                  type: string
                features:
                  description: Features list of possible features which executor handles
                  items:
                    enum:
                      - artifacts
                      - junit-report
                    type: string
                  type: array
                image:
                  description: Image for kube-job
                  type: string
                imagePullSecrets:
                  description: container executor default image pull secrets
                  items:
                    description: LocalObjectReference contains enough information to
                      let you locate the referenced object inside the same namespace.
                    properties:
                      name:
                        description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
                        TODO: Add other useful fields. apiVersion, kind, uid?'
                        type: string
                    type: object
                  t
    singular: webhook
  scope: Namespaced
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        description: Webhook is the Schema for the webhooks API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
            of an object. Servers should convert recognized schemas to the latest
            internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
            object represents. Servers may infer this from the endpoint the client
            submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: WebhookSpec defines the desired state of Webhook
            properties:
              events:
                description: Events declare list if events on which webhook should
                  be called
                items:
                  type: string
                type: array
              selector:
                description: Labels to filter for tests and test suites
                type: string
              uri:
                description: Uri is address where webhook should be made
                type: string
            type: object
          status:
            description: WebhookStatus defines the observed state of Webhook
            type: object
        type: object
    served: true
    storage: true
    subresources:
      status: {}
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []
---
# Source: testkube/charts/testkube-operator/templates/tests.testkube.io_scripts.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.8.0
  name: scripts.tests.testku
                  from file, - git repo directory checkout in case when test is some
                  kind of project or have more than one file,'
                  type: string
                name:
                  description: script execution custom name
                  type: string
                params:
                  additionalProperties:
                    type: string
                  description: execution params passed to executor
                  type: object
                repository:
                  description: repository details if exists
                  properties:
                    branch:
                      description: branch/tag name for checkout
                      type: string
                    path:
                      description: if needed we can checkout particular path (dir or
                        file) in case of BIG/mono repositories
                      type: string
                    token:
                      description: git auth token for private repositories
                      type: string
                    type:
                      description: Type_ repository type
                      type: string
                    uri:
                      description: Uri of content file or git directory
                      type: string
                    username:
                      description: git auth username for private repositories
                      type: string
                  required:
                    - branch
                    - type
                    - uri
                  type: object
                tags:
                  items:
                    type: string
                  type: array
                type:
                  description: script type
                  type: string
              type: object
            status:
              description: ScriptStatus defines the observed state of Script
              properties:
                executions_count:
                  type: integer
                last_execution:
                  format: date-time
                  type: string
              type: object
          type: object
      served: true
      storage:
                          type: string
                        uri:
                          description: uri of content file or git directory
                          type: string
                        username:
                          description: git auth username for private repositories
                          type: string
                      required:
                        - branch
                        - type
                        - uri
                      type: object
                    type:
                      description: script type
                      type: string
                    uri:
                      description: uri of script content
                      type: string
                  type: object
                name:
                  description: script execution custom name
                  type: string
                params:
                  additionalProperties:
                    type: string
                  description: execution params passed to executor
                  type: object
                tags:
                  description: script tags
                  items:
                    type: string
                  type: array
                type:
                  description: script type
                  type: string
              type: object
            status:
              description: ScriptStatus defines the observed state of Script
              properties:
                executions_count:
                  type: integer
                last_execution:
                  format: date-time
                  type: string
              type: object
          type: object
      served: true
      storage: true
      subresources:
        status: {}
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []
---
# Source: testkube/charts/testkube-operator/t
                  description: After steps is list of scripts which will be sequentially
                    orchestrated
                  items:
                    description: TestStepSpec will of particular type will have config
                      for possible step types
                    properties:
                      delay:
                        properties:
                          duration:
                            description: Duration in ms
                            format: int32
                            type: integer
                        type: object
                      execute:
                        properties:
                          name:
                            type: string
                          namespace:
                            type: string
                          stopOnFailure:
                            type: boolean
                        type: object
                      type:
                        type: string
                    type: object
                  type: array
                before:
                  description: Before steps is list of scripts which will be sequentially
                    orchestrated
                  items:
                    description: TestStepSpec will of particular type will have config
                      for possible step types
                    properties:
                      delay:
                        properties:
                          duration:
                            description: Duration in ms
                            format: int32
                            type: integer
                        type: object
                      execute:
                        properties:
                          name:
                            type: string
                          namespace:
                            type: string
                          stopOnFailure:
                            type: boolean
                        type: object
                      type:
                        type: string
                    type: object
                  type: array
                description:
                  type: string
                repeats:
                  type: integer

      served: true
      storage: false
      subresources:
        status: {}
    - name: v2
      schema:
        openAPIV3Schema:
          description: Test is the Schema for the tests API
          properties:
            apiVersion:
              description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
              type: string
            kind:
              description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
              type: string
            metadata:
              type: object
            spec:
              description: TestSpec defines the desired state of Test
              properties:
                content:
                  description: test content object
                  properties:
                    data:
                      description: test content body
                      type: string
                    repository:
                      description: repository of test content
                      properties:
                        branch:
                          description: branch/tag name for checkout
                          type: string
                        commit:
                          description: commit id (sha) for checkout
                          type: string
                        path:
                          description: if needed we can checkout particular path (dir
                            or file) in case of BIG/mono repositories
                          type: string
                        token:
                          description: git auth token for private repositories
                          type: string
                        type:
                          description: VCS repository type
                          type: string
                        uri:
                          description: uri of content file or git directory
                          type: string
                        username:
                          description: git auth username for private repositories
                          type: string
                      required:
                        - type

      - name: metrics
        image: natsio/prometheus-nats-exporter:0.10.1
        imagePullPolicy: IfNotPresent
        resources:
          {}
        args:
        - -connz
        - -routez
        - -subz
        - -varz
        - -prefix=nats
        - -use_internal_server_id
        - http://localhost:8222/
        ports:
        - containerPort: 7777
          name: metrics

  volumeClaimTemplates:
---
# Source: testkube/charts/testkube-operator/templates/webhook.yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: testkube-operator-webhook-admission
  annotations:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/version: "1.9.3"
    helm.sh/chart: testkube-operator-1.9.3
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: testkube-operator
    app.kubernetes.io/instance: testkube
webhooks:
- admissionReviewVersions:
  - v1
  - v1beta1
  clientConfig:
    service:
      name: testkube-operator-webhook-service
      namespace: testkube
      path: /validate-tests-testkube-io-v1-testtrigger
  failurePolicy: Fail
  name: vtesttrigger.kb.io
  rules:
  - apiGroups:
    - tests.testkube.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - testtriggers
  sideEffects: None

NOTES:
Enjoy testing with Testkube!

This is what the webhook-cert-create job looked like:

pk@X:~$ oc -n testkube get job webhook-cert-create -o yaml
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  creationTimestamp: "2023-03-06T17:55:17Z"
  generation: 1
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: testkube
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: testkube-operator
    app.kubernetes.io/version: 1.9.3
    helm.sh/chart: testkube-operator-1.9.3
  name: webhook-cert-create
  namespace: testkube
  resourceVersion: "426801118"
  uid: 8b35ff62-555d-48de-acc4-da6bd8547b5d
spec:
  backoffLimit: 1
  completionMode: NonIndexed
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      controller-uid: 8b35ff62-555d-48de-acc4-da6bd8547b5d
  suspend: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: testkube
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: testkube-operator
        app.kubernetes.io/version: 1.9.3
        controller-uid: 8b35ff62-555d-48de-acc4-da6bd8547b5d
        helm.sh/chart: testkube-operator-1.9.3
        job-name: webhook-cert-create
    spec:
      containers:
      - args:
        - create
        - --host
        - testkube-operator-webhook-service.testkube,testkube-operator-webhook-service.testkube.svc
        - --namespace
        - testkube
        - --secret-name
        - webhook-server-cert
        - --key-name
        - tls.key
        - --cert-name
        - tls.crt
        - --ca-name
        - ca.crt
        image: docker.io/dpejcev/kube-webhook-certgen:1.0.11
        imagePullPolicy: Always
        name: create
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      initContainers:
      - args:
        - delete
        - secret
        - webhook-server-cert
        - --namespace
        - testkube
        - --ignore-not-found
        image: docker.io/rancher/kubectl:v1.23.7
        imagePullPolicy: Always
        name: migrate
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1000960000
        runAsNonRoot: true
        runAsUser: 1000960000
      serviceAccount: testkube-operator-webhook-cert-mgr
      serviceAccountName: testkube-operator-webhook-cert-mgr
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: kubernetes.io/arch
        operator: Equal
        value: arm64
status:
  active: 1
  ready: 0
  startTime: "2023-03-06T17:55:17Z"

The API server pod never becomes ready and keeps restarting (CrashLoopBackOff), as shown below.

pk@X:~$ oc get po -n testkube
NAME                                                    READY   STATUS             RESTARTS      AGE
testkube-api-server-567578c7dc-4hsd9                    0/1     CrashLoopBackOff   6 (80s ago)   7m54s
testkube-dashboard-9d6c9d554-zq8sj                      1/1     Running            0             7m54s
testkube-minio-testkube-56d755b79c-j8kbt                1/1     Running            0             7m54s
testkube-nats-0                                         3/3     Running            0             7m54s
testkube-nats-box-55f77d7545-5q7zl                      1/1     Running            0             7m54s
testkube-operator-controller-manager-86467497bb-5s46c   2/2     Running            0             7m54s

pk@X:~$ oc -n testkube describe pod testkube-api-server-567578c7dc-4hsd9
(...)

Events:
  Type     Reason          Age                    From               Message
  ----     ------          ----                   ----               -------
  Normal   Scheduled       4m27s                  default-scheduler  Successfully assigned testkube/testkube-api-server-567578c7dc-4hsd9 to dev-jbrr5-worker-westeurope2-6h7dt by dev-jbrr5-master-0
  Normal   AddedInterface  4m24s                  multus             Add eth0 [10.130.7.37/23] from ovn-kubernetes
  Normal   Pulled          4m2s                   kubelet            Successfully pulled image "docker.io/kubeshop/testkube-api-server:1.9.22" in 22.000923935s
  Normal   Killing         3m27s                  kubelet            Container testkube-api failed liveness probe, will be restarted
  Normal   Pulling         3m19s (x2 over 4m24s)  kubelet            Pulling image "docker.io/kubeshop/testkube-api-server:1.9.22"
  Normal   Pulled          3m18s                  kubelet            Successfully pulled image "docker.io/kubeshop/testkube-api-server:1.9.22" in 1.121381237s
  Normal   Created         3m17s (x2 over 3m49s)  kubelet            Created container testkube-api
  Normal   Started         3m17s (x2 over 3m48s)  kubelet            Started container testkube-api
  Warning  Unhealthy       3m7s (x12 over 3m47s)  kubelet            Readiness probe failed: Get "http://10.130.7.37:8088/health": dial tcp 10.130.7.37:8088: connect: connection refused
  Warning  Unhealthy       3m7s (x4 over 3m47s)   kubelet            Liveness probe failed: Get "http://10.130.7.37:8088/health": dial tcp 10.130.7.37:8088: connect: connection refused
pk@X:~$ oc -n testkube logs testkube-api-server-567578c7dc-4hsd9
I0306 17:59:50.501565       1 request.go:682] Waited for 1.015144213s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s
{"level":"info","ts":1678125612.8696992,"caller":"slack/slack.go:42","msg":"initializing slack client","SLACK_TOKEN":""}
{"level":"info","ts":1678125612.8705492,"caller":"v1/server.go:342","msg":"dashboard uri","uri":"http://testkube-dashboard"}
{"level":"info","ts":1678125612.8739812,"caller":"api-server/main.go:330","msg":"setting minio as logs storage"}
{"level":"info","ts":1678125612.8742042,"caller":"api-server/main.go:372","msg":"starting trigger service"}
{"level":"error","ts":1678125612.8743222,"caller":"v1/server.go:164","msg":"error getting config map","error":"reading config map error: client rate limiter Wait returned an error: context canceled","errorVerbose":"client rate limiter Wait returned an error: context canceled\nreading config map error\ngithub.com/kubeshop/testkube/pkg/repository/config.(*ConfigMapConfig).Get\n\t/build/pkg/repository/config/configmap.go:59\ngithub.com/kubeshop/testkube/pkg/repository/config.(*ConfigMapConfig).GetTelemetryEnabled\n\t/build/pkg/repository/config/configmap.go:47\ngithub.com/kubeshop/testkube/internal/app/api/v1.TestkubeAPI.SendTelemetryStartEvent\n\t/build/internal/app/api/v1/server.go:162\nmain.main\n\t/build/cmd/api-server/main.go:376\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1571","stacktrace":"github.com/kubeshop/testkube/internal/app/api/v1.TestkubeAPI.SendTelemetryStartEvent\n\t/build/internal/app/api/v1/server.go:164\nmain.main\n\t/build/cmd/api-server/main.go:376\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}
{"level":"info","ts":1678125612.8745067,"caller":"triggers/lease.go:12","msg":"trigger service: waiting for lease"}
{"level":"info","ts":1678125612.8746495,"caller":"triggers/scraper.go:17","msg":"trigger service: stopping scraper component"}
{"level":"info","ts":1678125612.8746681,"caller":"triggers/watcher.go:85","msg":"trigger service: stopping watcher component: context finished"}
{"level":"info","ts":1678125612.879062,"caller":"event/emitter.go:154","msg":"starting listener","websocket.allevents":{"clients":"[]","events":"[start-test end-test-success end-test-failed end-test-aborted end-test-timeout start-testsuite end-testsuite-success end-testsuite-failed end-testsuite-aborted end-testsuite-timeout]","name":"websocket.allevents","selector":""}}
{"level":"info","ts":1678125612.8791866,"caller":"event/emitter.go:154","msg":"starting listener","slack":{"events":"[start-test end-test-success end-test-failed end-test-aborted end-test-timeout start-testsuite end-testsuite-success end-testsuite-failed end-testsuite-aborted end-testsuite-timeout]","name":"slack","selector":""}}
{"level":"info","ts":1678125613.5725126,"caller":"api-server/main.go:379","msg":"starting Testkube API server","telemetryEnabled":true,"clusterId":"clusterc5f88be5b49da74d1411525a627145f9","namespace":"testkube","version":"v1.9.22"}
{"level":"error","ts":1678125613.5726633,"caller":"v1/server.go:354","msg":"error getting config map","error":"reading config map error: client rate limiter Wait returned an error: context canceled","errorVerbose":"client rate limiter Wait returned an error: context canceled\nreading config map error\ngithub.com/kubeshop/testkube/pkg/repository/config.(*ConfigMapConfig).Get\n\t/build/pkg/repository/config/configmap.go:59\ngithub.com/kubeshop/testkube/pkg/repository/config.(*ConfigMapConfig).GetTelemetryEnabled\n\t/build/pkg/repository/config/configmap.go:47\ngithub.com/kubeshop/testkube/internal/app/api/v1.TestkubeAPI.StartTelemetryHeartbeats.func1\n\t/build/internal/app/api/v1/server.go:352\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1571","stacktrace":"github.com/kubeshop/testkube/internal/app/api/v1.TestkubeAPI.StartTelemetryHeartbeats.func1\n\t/build/internal/app/api/v1/server.go:354"}
{"level":"info","ts":1678125615.5728045,"caller":"server/httpserver.go:103","msg":"shutting down Testkube API server"}
{"level":"fatal","ts":1678125615.5729144,"caller":"api-server/main.go:392","msg":"Testkube is shutting down: received signal: terminated","stacktrace":"main.main\n\t/build/cmd/api-server/main.go:392\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}
ypoplavs commented 1 year ago

Hi @upr-kmd, I don't see MongoDB in the list of pods; it is required for the API to start. Try describing the testkube-mongodb deployment and checking it for errors.
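For example, something along these lines (a sketch; the deployment name `testkube-mongodb` is taken from the default chart release name used in this thread):

```
# Check whether the MongoDB deployment exists and why its pods are not coming up
oc -n testkube get deploy testkube-mongodb
oc -n testkube describe deploy testkube-mongodb
oc -n testkube get events --sort-by=.lastTimestamp
```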

ypoplavs commented 1 year ago

Have you tried setting the security context UIDs to the range that was shown in the error, [1000970000, 1000979999]? We might avoid granting privileged access that way. We had a similar issue before, and setting the values according to that output fixed it.
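For reference, the UID range OpenShift assigns to a namespace can be read from its annotation, and the values file can then be adjusted to match. The snippet below is a sketch only: the UIDs assume the [1000970000, 1000979999] range from the error, and the key names follow the Bitnami MongoDB sub-chart values used earlier in this thread — verify both against your cluster and chart version.

```yaml
# Sketch: read the real range first, e.g.
#   oc get ns testkube -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}'
# then pick UIDs inside it (here assumed to be 1000970000/10000).
mongodb:
  podSecurityContext:
    enabled: true
    fsGroup: 1000970000
  containerSecurityContext:
    enabled: true
    runAsUser: 1000970000
    runAsNonRoot: true
```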

upr-kmd commented 1 year ago

Hi. We tried some manual workarounds but it still fails.

pk@X:~/Dokumenter/OpenShift/ad-config$ oc -n testkube describe pod testkube-mongodb-795bc78444-f89q5
Name:             testkube-mongodb-795bc78444-f89q5
Namespace:        testkube
Priority:         0
Service Account:  testkube-mongodb
Node:             dev-jbrr5-worker-westeurope2-6h7dt/10.0.0.39
Start Time:       Tue, 07 Mar 2023 12:44:33 +0100
Labels:           app.kubernetes.io/component=mongodb
                  app.kubernetes.io/instance=testkube
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=mongodb
                  helm.sh/chart=mongodb-12.1.31
                  pod-template-hash=795bc78444
Annotations:      k8s.ovn.org/pod-networks:
                    {"default":{"ip_addresses":["10.130.7.199/23"],"mac_address":"0a:58:0a:82:07:c7","gateway_ips":["10.130.6.1"],"ip_address":"10.130.7.199/2...
                  k8s.v1.cni.cncf.io/network-status:
                    [{
                        "name": "ovn-kubernetes",
                        "interface": "eth0",
                        "ips": [
                            "10.130.7.199"
                        ],
                        "mac": "0a:58:0a:82:07:c7",
                        "default": true,
                        "dns": {}
                    }]
                  k8s.v1.cni.cncf.io/networks-status:
                    [{
                        "name": "ovn-kubernetes",
                        "interface": "eth0",
                        "ips": [
                            "10.130.7.199"
                        ],
                        "mac": "0a:58:0a:82:07:c7",
                        "default": true,
                        "dns": {}
                    }]
                  openshift.io/scc: privileged
Status:           Running
IP:               10.130.7.199
IPs:
  IP:           10.130.7.199
Controlled By:  ReplicaSet/testkube-mongodb-795bc78444
Containers:
  mongodb:
    Container ID:   cri-o://5090065df0a53a1b1ec16c8fa517984e693a8aaeb62e57472e3283315b0df6b6
    Image:          docker.io/zcube/bitnami-compat-mongodb:5.0.10-debian-11-r19
    Image ID:       docker.io/zcube/bitnami-compat-mongodb@sha256:89dbcb3f907cbbfe751e02b7380edba15043ca660543ccc5bc0ffbf07415b129
    Port:           27017/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Wed, 15 Mar 2023 20:23:13 +0100
      Finished:     Wed, 15 Mar 2023 20:23:17 +0100
    Ready:          False
    Restart Count:  2335
    Requests:
      cpu:      150m
      memory:   100Mi
    Liveness:   exec [/bitnami/scripts/ping-mongodb.sh] delay=30s timeout=10s period=20s #success=1 #failure=6
    Readiness:  exec [/bitnami/scripts/readiness-probe.sh] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:                    true
      ALLOW_EMPTY_PASSWORD:             yes
      MONGODB_SYSTEM_LOG_VERBOSITY:     0
      MONGODB_DISABLE_SYSTEM_LOG:       no
      MONGODB_DISABLE_JAVASCRIPT:       no
      MONGODB_ENABLE_JOURNAL:           yes
      MONGODB_PORT_NUMBER:              27017
      MONGODB_ENABLE_IPV6:              no
      MONGODB_ENABLE_DIRECTORY_PER_DB:  no
    Mounts:
      /bitnami/mongodb from datadir (rw)
      /bitnami/scripts from common-scripts (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n5j97 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  common-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      testkube-mongodb-common-scripts
    Optional:  false
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  testkube-mongodb
    ReadOnly:   false
  kube-api-access-n5j97:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 kubernetes.io/arch=arm64:NoSchedule
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Normal   Pulled   89m (x2319 over 8d)     kubelet  Container image "docker.io/zcube/bitnami-compat-mongodb:5.0.10-debian-11-r19" already present on machine
  Warning  BackOff  4m22s (x57731 over 8d)  kubelet  Back-off restarting failed container
pk@X:~/Dokumenter/OpenShift/ad-config$ oc -n testkube logs testkube-mongodb-795bc78444-f89q5
mongodb 19:23:13.91
mongodb 19:23:13.91 Welcome to the Bitnami mongodb container
mongodb 19:23:13.91 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 19:23:13.91 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 19:23:13.91
mongodb 19:23:13.92 INFO  ==> ** Starting MongoDB setup **
mongodb 19:23:13.93 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb 19:23:15.67 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mongodb 19:23:16.59 INFO  ==> Initializing MongoDB...
mongodb 19:23:17.21 INFO  ==> Deploying MongoDB from scratch...
mongodb 19:23:17.22 DEBUG ==> Starting MongoDB in background...
Error opening config file: Permission denied
try '/opt/bitnami/mongodb/bin/mongod --help' for more information
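The "Permission denied" on the config file suggests the container's effective UID does not match the ownership of the Bitnami files or the mounted data volume. A few hedged diagnostic commands (the pod name is the crashlooping one above; `oc exec` may fail while the container is restarting, in which case `oc debug` gives a throwaway copy to inspect):

```
# Which SCC did the pod actually get, and as which UID is it running?
oc -n testkube get pod testkube-mongodb-795bc78444-f89q5 \
  -o jsonpath='{.metadata.annotations.openshift\.io/scc}{"\n"}'

# Inspect ownership of the config and data paths from inside a debug copy
oc -n testkube debug deploy/testkube-mongodb -- ls -ld \
  /opt/bitnami/mongodb/conf /bitnami/mongodb
```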
upr-kmd commented 1 year ago

FYI: this is a standalone OpenShift installation on Azure VMs, not ARO (Azure Red Hat OpenShift, supported by Microsoft).

ypoplavs commented 1 year ago

I see. MongoDB is an external chart that Testkube uses as a sub-chart. Let's create an issue in the Bitnami charts repo, specifying all the errors, so that their team can advise on the installation.
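As a possible workaround while that is investigated, the bundled sub-chart could be disabled and Testkube pointed at a MongoDB instance you run yourself (for example from an OpenShift-friendly image). The key names below are assumptions based on common Testkube chart layouts, not confirmed for this chart version — check them against the chart's values.yaml before use.

```yaml
# Sketch: disable the bundled Bitnami MongoDB and use an external instance.
# Both keys (mongodb.enabled and testkube-api.mongodb.dsn) are assumptions;
# confirm against `helm show values kubeshop/testkube`.
mongodb:
  enabled: false
testkube-api:
  mongodb:
    dsn: "mongodb://my-external-mongodb.testkube.svc.cluster.local:27017"
```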

TheBrunoLopes commented 1 year ago

Hello @upr-kmd and @ypoplavs, is this issue still happening?

ypoplavs commented 1 year ago

Hi @upr-kmd! Did you get any feedback from the Bitnami team on the MongoDB installation?

windowsrefund commented 4 months ago

Why was this marked as completed?