bitnami / charts

Bitnami Helm Charts
https://bitnami.com

[bitnami/harbor] - Core pod not starting #30230

Open Amith211 opened 2 weeks ago

Amith211 commented 2 weeks ago

Name and Version

bitnami/harbor 24.0.1

What architecture are you using?

amd64

What steps will reproduce the bug?

Fresh install via flux.

I had a previous version working and ran into this error when updating. I then did a fresh install (deleted the old namespace and installed into a new one), but I still get the same error.

Are you using any custom parameters or values?

    adminPassword: ${HARBOR_ADMIN_PW}
    externalURL: https://registry.${DOMAIN_1}
    exposureType: ingress

    ## Service parameters
    ##
    service:
      ## @param service.type NGINX proxy service type
      ##
      type: ClusterIP
      ## @param service.sessionAffinity Control where client requests go, to the same pod or round-robin
      ## Values: ClientIP or None
      ## ref: https://kubernetes.io/docs/concepts/services-networking/service/
      ##
      sessionAffinity: None
      ## @param service.sessionAffinityConfig Additional settings for the sessionAffinity
      ## sessionAffinityConfig:
      ##   clientIP:
      ##     timeoutSeconds: 300
      ##
      #sessionAffinityConfig: {}
      #externalTrafficPolicy: Cluster
    ingress:
      ## Configure the ingress resource that allows you to access Harbor Core
      ## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
      ##
      core:
        ## @param ingress.core.ingressClassName IngressClass that will be used to implement the Ingress (Kubernetes 1.18+)
        ## This is supported in Kubernetes 1.18+ and required if you have more than one IngressClass marked as the default for your cluster.
        ## ref: https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/
        ##
        ingressClassName: "nginx-public"
        ## @param ingress.core.pathType Ingress path type
        ##
        #pathType: ImplementationSpecific
        ## @param ingress.core.apiVersion Force Ingress API version (automatically detected if not set)
        ##
        #apiVersion: ""
        ## @param ingress.core.hostname Default host for the ingress record
        ##
        hostname: registry.${DOMAIN_1}
        ## @param ingress.core.annotations [object] Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations.
        ## Use this parameter to set the required annotations for cert-manager, see
        ## ref: https://cert-manager.io/docs/usage/ingress/#supported-annotations
        ## e.g:
        ## annotations:
        ##   kubernetes.io/ingress.class: nginx
        ##   cert-manager.io/cluster-issuer: cluster-issuer-name
        ##
        annotations:
          cert-manager.io/cluster-issuer: letsencrypt-prod     # use letsencrypt-prod as the cluster issuer for TLS certs
          ingress.kubernetes.io/force-ssl-redirect: "true"     # force https, even if http is requested
          kubernetes.io/ingress.class: nginx-public            # using nginx for ingress
          kubernetes.io/tls-acme: "true"         
          ingress.kubernetes.io/ssl-redirect: "true"
          ingress.kubernetes.io/proxy-body-size: "0"
          nginx.ingress.kubernetes.io/ssl-redirect: "true"
          nginx.ingress.kubernetes.io/proxy-body-size: "0"
        ## @param ingress.core.tls Enable TLS configuration for the host defined at `ingress.core.hostname` parameter
        ## TLS certificates will be retrieved from a TLS secret with name: `{{- printf "%s-tls" .Values.ingress.core.hostname }}`
        ## You can:
        ##   - Use the `ingress.core.secrets` parameter to create this TLS secret
        ##   - Rely on cert-manager to create it by setting the corresponding annotations
        ##   - Rely on Helm to create self-signed certificates by setting `ingress.core.selfSigned=true`
        ##
        tls: true
        ## @param ingress.core.selfSigned Create a TLS secret for this ingress record using self-signed certificates generated by Helm
        ##
        selfSigned: false
        ## @param ingress.core.extraHosts An array with additional hostname(s) to be covered with the ingress record
        ## e.g:
        ## extraHosts:
        ##   - name: core.harbor.domain
        ##     path: /
        ##
        #extraHosts: []
        ## @param ingress.core.extraPaths An array with additional arbitrary paths that may need to be added to the ingress under the main host
        ## e.g:
        ## extraPaths:
        ## - path: /*
        ##   backend:
        ##     serviceName: ssl-redirect
        ##     servicePort: use-annotation
        ##
        #extraPaths: []
        ## @param ingress.core.extraTls TLS configuration for additional hostname(s) to be covered with this ingress record
        ## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
        ## e.g:
        ## extraTls:
        ## - hosts:
        ##     - core.harbor.domain
        ##   secretName: core.harbor.domain-tls
        ##
        #extraTls: []
    ## @section Persistence Parameters
    ##

    ## The persistence is enabled by default and a default StorageClass
    ## is needed in the k8s cluster to provision volumes dynamically.
    ## Specify another StorageClass in the "storageClass" or set "existingClaim"
    ## if you have already existing persistent volumes to use
    ##
    ## For storing images and charts, you can also use "azure", "gcs", "s3",
    ## "swift" or "oss". Set it in the "imageChartStorage" section
    ##
    persistence:
      ## @param persistence.enabled Enable data persistence
      ##
      enabled: true
      ## Resource Policy
      ## @param persistence.resourcePolicy Set it to `keep` to avoid removing PVCs during a helm delete operation. Leaving it empty will delete the PVCs after the chart is deleted
      ##
      resourcePolicy: "keep"
      persistentVolumeClaim:
        ## @param persistence.persistentVolumeClaim.registry.existingClaim Name of an existing PVC to use
        ## @param persistence.persistentVolumeClaim.registry.storageClass PVC Storage Class for Harbor Registry data volume
        ## Note: The default StorageClass will be used if not defined. Set it to `-` to disable dynamic provisioning
        ## @param persistence.persistentVolumeClaim.registry.subPath The sub path used in the volume
        ## @param persistence.persistentVolumeClaim.registry.accessModes The access mode of the volume
        ## @param persistence.persistentVolumeClaim.registry.size The size of the volume
        ## @param persistence.persistentVolumeClaim.registry.annotations Annotations for the PVC
        ## @param persistence.persistentVolumeClaim.registry.selector Selector to match an existing Persistent Volume
        ##
        registry:
          #existingClaim: ""
          #accessModes:
          #  - ReadWriteOnce
          size: 20Gi
          #annotations: {}
          #selector: {}
        ## @param persistence.persistentVolumeClaim.jobservice.existingClaim Name of an existing PVC to use
        ## @param persistence.persistentVolumeClaim.jobservice.storageClass PVC Storage Class for Harbor Jobservice data volume
        ## Note: The default StorageClass will be used if not defined. Set it to `-` to disable dynamic provisioning
        ## @param persistence.persistentVolumeClaim.jobservice.subPath The sub path used in the volume
        ## @param persistence.persistentVolumeClaim.jobservice.accessModes The access mode of the volume
        ## @param persistence.persistentVolumeClaim.jobservice.size The size of the volume
        ## @param persistence.persistentVolumeClaim.jobservice.annotations Annotations for the PVC
        ## @param persistence.persistentVolumeClaim.jobservice.selector Selector to match an existing Persistent Volume
        ##
        jobservice:
          #existingClaim: ""
          #accessModes:
          #  - ReadWriteOnce
          size: 10Gi
        ## @param persistence.persistentVolumeClaim.trivy.storageClass PVC Storage Class for Trivy data volume
        ## Note: The default StorageClass will be used if not defined. Set it to `-` to disable dynamic provisioning
        ## @param persistence.persistentVolumeClaim.trivy.accessModes The access mode of the volume
        ## @param persistence.persistentVolumeClaim.trivy.size The size of the volume
        ## @param persistence.persistentVolumeClaim.trivy.annotations Annotations for the PVC
        ## @param persistence.persistentVolumeClaim.trivy.selector Selector to match an existing Persistent Volume
        ##
        trivy:
          #accessModes:
          #  - ReadWriteOnce
          size: 10Gi
          #annotations: {}
          #selector: {}
      ## Define which storage backend is used for registry to store
      ## images and charts.
      ## ref: https://github.com/docker/distribution/blob/master/docs/configuration.md#storage
      ##
      imageChartStorage:
        ## @param persistence.imageChartStorage.caBundleSecret Specify the `caBundleSecret` if the storage service uses a self-signed certificate. The secret must contain a key named `ca.crt`, which will be injected into the trust store of the registry's containers.
        ##
        #caBundleSecret: ""
        ## @param persistence.imageChartStorage.disableredirect The configuration for managing redirects from content backends. For backends that do not support it (such as using MinIO® for the `s3` storage type), set it to `true` to disable redirects. Refer to the [guide](https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect) for more details
        ##
        disableredirect: false
        ## @param persistence.imageChartStorage.type The type of storage for images and charts: `filesystem`, `azure`, `gcs`, `s3`, `swift` or `oss`. The type must be `filesystem` if you want to use persistent volumes for the registry. Refer to the [guide](https://github.com/docker/distribution/blob/master/docs/configuration.md#storage) for more details
        ##
        #type: filesystem
        ## Images/charts storage parameters when type is "filesystem"
        ## @param persistence.imageChartStorage.filesystem.rootdirectory Filesystem storage type setting: Storage root directory
        ## @param persistence.imageChartStorage.filesystem.maxthreads Filesystem storage type setting: Maximum number of threads
        ##
        # filesystem:
        #   rootdirectory: /storage
        #   maxthreads: ""
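        ## For an external backend, a hedged sketch (assumption: `s3` type with
        ## illustrative bucket/region/endpoint values; the key names follow the
        ## docker distribution storage reference linked above):
        ## type: s3
        ## s3:
        ##   region: us-east-1
        ##   bucket: harbor-registry
        ##   accesskey: awsaccesskey
        ##   secretkey: awssecretkey
        ##   regionendpoint: http://s3.example.com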
    volumePermissions:
      ## @param volumePermissions.enabled Enable init container that changes the owner and group of the persistent volume
      ##
      enabled: true
      resourcesPreset: "nano"
      ## @param volumePermissions.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads)
      ## Example:
      ## resources:
      ##   requests:
      ##     cpu: 2
      ##     memory: 512Mi
      ##   limits:
      ##     cpu: 3
      ##     memory: 1024Mi
      ##
      #resources: {}

    portal:
      ## Bitnami Harbor Portal image
      ## ref: https://hub.docker.com/r/bitnami/harbor-portal/tags/
      ## @param portal.image.registry [default: REGISTRY_NAME] Harbor Portal image registry
      ## @param portal.image.repository [default: REPOSITORY_NAME/harbor-portal] Harbor Portal image repository
      ## @skip portal.image.tag Harbor Portal image tag (immutable tags are recommended)
      ## @param portal.image.digest Harbor Portal image digest in the format sha256:aa.... Note that this parameter, if set, will override the tag
      ## @param portal.image.pullPolicy Harbor Portal image pull policy
      ## @param portal.image.pullSecrets Harbor Portal image pull secrets
      ## @param portal.image.debug Enable Harbor Portal image debug mode
      ##
      image:
        ## Specify an imagePullPolicy
        ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
        ## ref: https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images
        ##
        pullPolicy: IfNotPresent
        #debug: true
      ## @param portal.replicaCount Number of Harbor Portal replicas
      ##
      #replicaCount: 1
      resourcesPreset: "small"
      ## @param portal.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads)
      ## Example:
      ## resources:
      ##   requests:
      ##     cpu: 2
      ##     memory: 512Mi
      ##   limits:
      ##     cpu: 3
      ##     memory: 1024Mi
      ##
      #resources: {}

      ## @param portal.updateStrategy.type Harbor Portal deployment strategy type - only really applicable for deployments with RWO PVs attached
      ## If replicas = 1, an update can get "stuck", as the previous pod remains attached to the
      ## PV, and the "incoming" pod can never start. Changing the strategy to "Recreate" will
      ## terminate the single previous pod, so that the new, incoming pod can attach to the PV
      ##
      #updateStrategy:
      #  type: RollingUpdate
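      ## e.g. (a sketch for the single-replica RWO case described above):
      ## updateStrategy:
      ##   type: Recreate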

    ## @section Harbor Core Parameters
    ##
    core:
      ## Bitnami Harbor Core image
      ## ref: https://hub.docker.com/r/bitnami/harbor-core/tags/
      ## @param core.image.registry [default: REGISTRY_NAME] Harbor Core image registry
      ## @param core.image.repository [default: REPOSITORY_NAME/harbor-core] Harbor Core image repository
      ## @skip core.image.tag Harbor Core image tag (immutable tags are recommended)
      ## @param core.image.digest Harbor Core image digest in the format sha256:aa.... Note that this parameter, if set, will override the tag
      ## @param core.image.pullPolicy Harbor Core image pull policy
      ## @param core.image.pullSecrets Harbor Core image pull secrets
      ## @param core.image.debug Enable Harbor Core image debug mode
      ##
      image:
        ## Specify an imagePullPolicy
        ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
        ## ref: https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images
        ##
        pullPolicy: IfNotPresent
        ## Optionally specify an array of imagePullSecrets.
        ## Secrets must be manually created in the namespace.
        ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
        ## Enable debug mode
        ##
        #debug: true
      ## @param core.sessionLifetime Explicitly set a session timeout (in seconds) overriding the backend default.
      ##
      #sessionLifetime: ""
      ## @param core.uaaSecret If using external UAA auth which has a self signed cert, you can provide a pre-created secret containing it under the key `ca.crt`.
      ##
      #uaaSecret: ""
      ## @param core.secretKey The key used for encryption. Must be a string of 16 chars
      ## e.g:
      ## secretKey: "not-a-secure-string"
      ##
      #secretKey: ""
      ## @param core.secret Secret used when the core server communicates with other components. If a secret key is not specified, Helm will generate one. Must be a string of 16 chars.
      ##
      #secret: ""
      ## @param core.tokenKey Key of the certificate used for token encryption/decryption.
      ##
      #tokenKey: ""
      ## @param core.tokenCert Certificate used for token encryption/decryption.
      ##
      #tokenCert: ""
      ## @param core.secretName Fill the name of a kubernetes secret if you want to use your own TLS certificate and private key for token encryption/decryption. The secret must contain two keys named: `tls.crt` - the certificate and `tls.key` - the private key. The default key pair will be used if it isn't set
      ##
      #secretName: ""
      ## @param core.existingSecret Existing secret for core
      ## The secret must contain the keys:
      ## `secret` (required),
      ## `secretKey` (required),
      ##
      #existingSecret: ""
      ## @param core.existingEnvVarsSecret Existing secret for core envvars
      ## The secret must contain the keys:
      ## `CSRF_KEY` (optional - alternatively auto-generated),
      ## `HARBOR_ADMIN_PASSWORD` (optional - alternatively auto-generated),
      ## `POSTGRESQL_PASSWORD` (optional - alternatively uses weak upstream default. Read below if you set it),
      ## `postgres-password` (required if POSTGRESQL_PASSWORD is set & must be the same as POSTGRESQL_PASSWORD.)
      ## `HARBOR_DATABASE_PASSWORD` (required if POSTGRESQL_PASSWORD is set & must be the same as POSTGRESQL_PASSWORD.)
      ## `REGISTRY_CREDENTIAL_USERNAME` (optional - alternatively weak defaults),
      ## `REGISTRY_CREDENTIAL_PASSWORD` (optional - alternatively weak defaults),
      ## `_REDIS_URL_CORE` (required - if using the internal Redis - set to base64 of "redis://harbor-redis-master:6379/0")
      ## `_REDIS_URL_REG` (required - if using the internal Redis - set to base64 of "redis://harbor-redis-master:6379/2")
      ##
      ## If you do not know how to start, let the chart generate a full secret for you before defining an existingEnvVarsSecret
      ## Notes:
      ##   As an env-vars secret, this secret also stores the Redis connection URLs
      ##   The HARBOR_ADMIN_PASSWORD is only required at initial deployment; once the password is set in the database, it is no longer used
      ##
      #existingEnvVarsSecret: ""
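      ## A hedged sketch of such a secret (the name `harbor-core-envvars` is
      ## illustrative; with `stringData`, Kubernetes performs the base64
      ## encoding for you, so the plain Redis URLs can be used directly):
      ## apiVersion: v1
      ## kind: Secret
      ## metadata:
      ##   name: harbor-core-envvars
      ## stringData:
      ##   _REDIS_URL_CORE: "redis://harbor-redis-master:6379/0"
      ##   _REDIS_URL_REG: "redis://harbor-redis-master:6379/2"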
      ## @param core.csrfKey The CSRF key. Will be generated automatically if it isn't specified
      ##
      #csrfKey: ""
      ## Harbor Core resource requests and limits
      ## ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
      ## @param core.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if core.resources is set (core.resources is recommended for production).
      ## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
      ##
      resourcesPreset: "small"
      ## @param core.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads)
      ## Example:
      ## resources:
      ##   requests:
      ##     cpu: 2
      ##     memory: 512Mi
      ##   limits:
      ##     cpu: 3
      ##     memory: 1024Mi
      ##
      #resources: {}

      ## @param core.updateStrategy.type Harbor Core deployment strategy type - only really applicable for deployments with RWO PVs attached
      ## If replicas = 1, an update can get "stuck", as the previous pod remains attached to the
      ## PV, and the "incoming" pod can never start. Changing the strategy to "Recreate" will
      ## terminate the single previous pod, so that the new, incoming pod can attach to the PV
      ##
      #updateStrategy:
      #  type: RollingUpdate

    ## @section Harbor Jobservice Parameters
    ##
    jobservice:
      ## Bitnami Harbor Jobservice image
      ## ref: https://hub.docker.com/r/bitnami/harbor-jobservice/tags/
      ## @param jobservice.image.registry [default: REGISTRY_NAME] Harbor Jobservice image registry
      ## @param jobservice.image.repository [default: REPOSITORY_NAME/harbor-jobservice] Harbor Jobservice image repository
      ## @skip jobservice.image.tag Harbor Jobservice image tag (immutable tags are recommended)
      ## @param jobservice.image.digest Harbor Jobservice image digest in the format sha256:aa.... Note that this parameter, if set, will override the tag
      ## @param jobservice.image.pullPolicy Harbor Jobservice image pull policy
      ## @param jobservice.image.pullSecrets Harbor Jobservice image pull secrets
      ## @param jobservice.image.debug Enable Harbor Jobservice image debug mode
      ##
      image:
        ## Specify an imagePullPolicy
        ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
        ## ref: https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images
        ##
        pullPolicy: IfNotPresent
        ## Enable debug mode
        ##
        #debug: false
      ## @param jobservice.maxJobWorkers The max job workers
      ##
      #maxJobWorkers: 10
      ## @param jobservice.redisNamespace Redis namespace for jobservice
      ##
      #redisNamespace: harbor_job_service_namespace
      ## @param jobservice.jobLogger The logger for jobs: `file`, `database` or `stdout`
      ##
      #jobLogger: file
      ## @param jobservice.secret Secret used when the job service communicates with other components. If a secret key is not specified, Helm will generate one. Must be a string of 16 chars.
      ##
      #secret: ""
      ## @param jobservice.existingSecret Existing secret for jobservice
      ## The secret must contain the keys:
      ## `secret` (required),
      ##
      #existingSecret: ""
      ## @param jobservice.existingEnvVarsSecret Existing secret for jobservice envvars
      ## The secret must contain the keys:
      ## `REGISTRY_CREDENTIAL_PASSWORD` (optional),
      ## `JOB_SERVICE_POOL_REDIS_URL` (required - if using the internal Redis - set to base64 of "redis://harbor-redis-master:6379/1"),
      ##
      ## If you do not know how to start, let the chart generate a full secret for you before defining an existingEnvVarsSecret
      ##
      #existingEnvVarsSecret: ""
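      ## e.g. (hedged; the secret name is illustrative, and `stringData`
      ## avoids manual base64 encoding):
      ## apiVersion: v1
      ## kind: Secret
      ## metadata:
      ##   name: harbor-jobservice-envvars
      ## stringData:
      ##   JOB_SERVICE_POOL_REDIS_URL: "redis://harbor-redis-master:6379/1"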

      ## @param jobservice.replicaCount Number of Harbor Jobservice replicas
      ##
      #replicaCount: 1

      ## Harbor Jobservice resource requests and limits
      ## ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
      ## @param jobservice.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if jobservice.resources is set (jobservice.resources is recommended for production).
      ## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
      ##
      resourcesPreset: "small"
      ## @param jobservice.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads)
      ## Example:
      ## resources:
      ##   requests:
      ##     cpu: 2
      ##     memory: 512Mi
      ##   limits:
      ##     cpu: 3
      ##     memory: 1024Mi
      ##
      #resources: {}

      ## @param jobservice.updateStrategy.type Harbor Jobservice deployment strategy type - only really applicable for deployments with RWO PVs attached
      ## If replicas = 1, an update can get "stuck", as the previous pod remains attached to the
      ## PV, and the "incoming" pod can never start. Changing the strategy to "Recreate" will
      ## terminate the single previous pod, so that the new, incoming pod can attach to the PV
      ##
      updateStrategy:
        type: RollingUpdate

    ## @section Harbor Registry Parameters
    ##

    ## Registry Parameters
    ##
    registry:
      relativeurls: false
      credentials:
        htpasswd: 'harbor_registry_user:${passwd}'
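        ## The htpasswd entry above must use a bcrypt hash; a hedged way to
        ## generate one (assumes `htpasswd` from apache2-utils is installed):
        ##   htpasswd -nbB harbor_registry_user <password>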
      ## @param registry.updateStrategy.type Harbor Registry deployment strategy type - only really applicable for deployments with RWO PVs attached
      ## If replicas = 1, an update can get "stuck", as the previous pod remains attached to the
      ## PV, and the "incoming" pod can never start. Changing the strategy to "Recreate" will
      ## terminate the single previous pod, so that the new, incoming pod can attach to the PV
      ##
      #updateStrategy:
      #  type: RollingUpdate

      server:
        ## Bitnami Harbor Registry image
        ## ref: https://hub.docker.com/r/bitnami/harbor-registry/tags/
        ## @param registry.server.image.registry [default: REGISTRY_NAME] Harbor Registry image registry
        ## @param registry.server.image.repository [default: REPOSITORY_NAME/harbor-registry] Harbor Registry image repository
        ## @skip registry.server.image.tag Harbor Registry image tag (immutable tags are recommended)
        ## @param registry.server.image.digest Harbor Registry image digest in the format sha256:aa.... Note that this parameter, if set, will override the tag
        ## @param registry.server.image.pullPolicy Harbor Registry image pull policy
        ## @param registry.server.image.pullSecrets Harbor Registry image pull secrets
        ## @param registry.server.image.debug Enable Harbor Registry image debug mode
        ##
        image:
          ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
          ## ref: https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images
          ##
          pullPolicy: IfNotPresent
          ## Enable debug mode
          ##
          debug: false
        ## Harbor Registry main resource requests and limits
        ## ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
        ## @param registry.server.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if registry.server.resources is set (registry.server.resources is recommended for production).
        ## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
        ##
        resourcesPreset: "small"
        ## @param registry.server.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads)
        ## Example:
        ## resources:
        ##   requests:
        ##     cpu: 2
        ##     memory: 512Mi
        ##   limits:
        ##     cpu: 3
        ##     memory: 1024Mi
        ##
        #resources: {}

      ## Harbor Registryctl parameters
      ##
      controller:
        ## Bitnami Harbor Registryctl image
        ## ref: https://hub.docker.com/r/bitnami/harbor-registryctl/tags/
        ## @param registry.controller.image.registry [default: REGISTRY_NAME] Harbor Registryctl image registry
        ## @param registry.controller.image.repository [default: REPOSITORY_NAME/harbor-registryctl] Harbor Registryctl image repository
        ## @skip registry.controller.image.tag Harbor Registryctl image tag (immutable tags are recommended)
        ## @param registry.controller.image.digest Harbor Registryctl image digest in the format sha256:aa.... Note that this parameter, if set, will override the tag
        ## @param registry.controller.image.pullPolicy Harbor Registryctl image pull policy
        ## @param registry.controller.image.pullSecrets Harbor Registryctl image pull secrets
        ## @param registry.controller.image.debug Enable Harbor Registryctl image debug mode
        ##
        image:
          ## Specify an imagePullPolicy
          ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
          ## ref: https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images
          ##
          pullPolicy: IfNotPresent
          ## Enable debug mode
          ##
          #debug: false
        ## Harbor Registryctl resource requests and limits
        ## ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
        ## @param registry.controller.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if registry.controller.resources is set (registry.controller.resources is recommended for production).
        ## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
        ##
        resourcesPreset: "small"
        ## @param registry.controller.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads)
        ## Example:
        ## resources:
        ##   requests:
        ##     cpu: 2
        ##     memory: 512Mi
        ##   limits:
        ##     cpu: 3
        ##     memory: 1024Mi
        ##
        #resources: {}

    ## @section Harbor Adapter Trivy Parameters
    ##
    trivy:
      ## Bitnami Harbor Adapter Trivy image
      ## ref: https://hub.docker.com/r/bitnami/harbor-adapter-trivy/tags/
      ## @param trivy.image.registry [default: REGISTRY_NAME] Harbor Adapter Trivy image registry
      ## @param trivy.image.repository [default: REPOSITORY_NAME/harbor-adapter-trivy] Harbor Adapter Trivy image repository
      ## @skip trivy.image.tag Harbor Adapter Trivy image tag (immutable tags are recommended)
      ## @param trivy.image.digest Harbor Adapter Trivy image digest in the format sha256:aa.... Note that this parameter, if set, will override the tag
      ## @param trivy.image.pullPolicy Harbor Adapter Trivy image pull policy
      ## @param trivy.image.pullSecrets Harbor Adapter Trivy image pull secrets
      ## @param trivy.image.debug Enable Harbor Adapter Trivy image debug mode
      ##
      image:
        ## Specify an imagePullPolicy
        ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
        ## ref: https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images
        ##
        pullPolicy: IfNotPresent
        ## Enable debug mode
        ##
        #debug: false
      ## @param trivy.enabled Enable Trivy
      ##
      enabled: true
      ## @param trivy.debugMode The flag to enable Trivy debug mode
      ##
      #debugMode: false
      ## @param trivy.vulnType Comma-separated list of vulnerability types. Possible values `os` and `library`.
      ##
      vulnType: "os,library"
      ## @param trivy.severity Comma-separated list of severities to be checked
      ##
      severity: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL"
      ## @param trivy.ignoreUnfixed The flag to display only fixed vulnerabilities
      ##
      ignoreUnfixed: true
      ## @param trivy.insecure The flag to skip verifying registry certificate
      ##
      #insecure: false
      ## @param trivy.existingEnvVarsSecret Existing secret for trivy
      ## The secret must contain the keys:
      ## `SCANNER_TRIVY_GITHUB_TOKEN` (optional)
      ## `SCANNER_REDIS_URL` (required - if using the internal Redis - set to base64 of "redis://harbor-redis-master:6379/5")
      ## `SCANNER_STORE_REDIS_URL` (required - if using the internal Redis - set to base64 of "redis://harbor-redis-master:6379/5")
      ## `SCANNER_JOB_QUEUE_REDIS_URL` (required - if using the internal Redis - set to base64 of "redis://harbor-redis-master:6379/5")
      ##
      #existingEnvVarsSecret: ""
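      ## e.g. (hedged; the secret name is illustrative, and per the notes
      ## above all three URLs point at the same Redis DB when using the
      ## internal Redis):
      ## apiVersion: v1
      ## kind: Secret
      ## metadata:
      ##   name: harbor-trivy-envvars
      ## stringData:
      ##   SCANNER_REDIS_URL: "redis://harbor-redis-master:6379/5"
      ##   SCANNER_STORE_REDIS_URL: "redis://harbor-redis-master:6379/5"
      ##   SCANNER_JOB_QUEUE_REDIS_URL: "redis://harbor-redis-master:6379/5"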
      ## @param trivy.gitHubToken The GitHub access token to download Trivy DB
      ##
      ## Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
      ## It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
      ## in the local file system (`/home/scanner/.cache/trivy/db/trivy.db`). In addition, the database contains the update
      ## timestamp so Trivy can detect whether it should download a newer version from the Internet or use the cached one.
      ## Currently, the database is updated every 12 hours and published as a new release to GitHub.
      ##
      ## Anonymous downloads from GitHub are subject to a limit of 60 requests per hour. Normally such a rate limit is enough
      ## for production operations. If, for any reason, it is not, you can increase the rate limit to 5000
      ## requests per hour by specifying a GitHub access token. For more details on GitHub rate limiting, please consult
      ## https://developer.github.com/v3/#rate-limiting
      ##
      ## You can create a GitHub token by following the instructions in
      ## https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
      ##
      #gitHubToken: ""
      ## @param trivy.skipUpdate The flag to disable Trivy DB downloads from GitHub
      ## You might want to set the value of this flag to `true` in test or CI/CD environments to avoid GitHub rate limiting issues.
      ## If the value is set to `true` you have to manually download the `trivy.db` file and mount it in the
      ## `/home/scanner/.cache/trivy/db/trivy.db` path.
      ##
      #skipUpdate: false
      ## @param trivy.cacheDir Directory to store the cache
      ##
      #cacheDir: "/bitnami/harbor-adapter-trivy/.cache"

      ## @param trivy.replicaCount Number of Trivy replicas
      ##
      #replicaCount: 1

      ## Trivy resource requests and limits
      ## ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
      ## @param trivy.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if trivy.resources is set (trivy.resources is recommended for production).
      ## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
      ##
      resourcesPreset: "small"
      ## @param trivy.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads)
      ## Example:
      ## resources:
      ##   requests:
      ##     cpu: 2
      ##     memory: 512Mi
      ##   limits:
      ##     cpu: 3
      ##     memory: 1024Mi
      ##
      #resources: {}
      ## Configure Trivy pods Security Context
      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
      ## @param trivy.podSecurityContext.enabled Enable Trivy pods' Security Context
      ## @param trivy.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy
      ## @param trivy.podSecurityContext.sysctls Set kernel settings using the sysctl interface
      ## @param trivy.podSecurityContext.supplementalGroups Set filesystem extra groups
      ## @param trivy.podSecurityContext.fsGroup Set Trivy pod's Security Context fsGroup
      ##
      # podSecurityContext:
      #   enabled: true
      #   fsGroupChangePolicy: Always
      #   sysctls: []
      #   supplementalGroups: []
      #   fsGroup: 1001
      ## Configure Trivy containers (only main one) Security Context
      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
      ## @param trivy.containerSecurityContext.enabled Enable containers' Security Context
      ## @param trivy.containerSecurityContext.seLinuxOptions [object,nullable] Set SELinux options in container
      ## @param trivy.containerSecurityContext.runAsUser Set containers' Security Context runAsUser
      ## @param trivy.containerSecurityContext.runAsGroup Set containers' Security Context runAsGroup
      ## @param trivy.containerSecurityContext.runAsNonRoot Set container's Security Context runAsNonRoot
      ## @param trivy.containerSecurityContext.privileged Set container's Security Context privileged
      ## @param trivy.containerSecurityContext.readOnlyRootFilesystem Set container's Security Context readOnlyRootFilesystem
      ## @param trivy.containerSecurityContext.allowPrivilegeEscalation Set container's Security Context allowPrivilegeEscalation
      ## @param trivy.containerSecurityContext.capabilities.drop List of capabilities to be dropped
      ## @param trivy.containerSecurityContext.seccompProfile.type Set container's Security Context seccomp profile
      ##
      ## @param trivy.updateStrategy.type Trivy deployment strategy type - only really applicable for deployments with RWO PVs attached
      ## If replicas = 1, an update can get "stuck", as the previous pod remains attached to the
      ## PV, and the "incoming" pod can never start. Changing the strategy to "Recreate" will
      ## terminate the single previous pod, so that the new, incoming pod can attach to the PV
      ##
      updateStrategy:
        type: RollingUpdate
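      ## Example alternative when the Trivy cache is backed by a ReadWriteOnce
      ## volume, so the outgoing pod must release the PV before the new one can
      ## attach (a sketch):
      ## updateStrategy:
      ##   type: Recreate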

    ## @section Harbor Exporter Parameters
    ##
    exporter:
      ## Bitnami Harbor Exporter image
      ## ref: https://hub.docker.com/r/bitnami/harbor-exporter/tags/
      ## @param exporter.image.registry [default: REGISTRY_NAME] Harbor Exporter image registry
      ## @param exporter.image.repository [default: REPOSITORY_NAME/harbor-exporter] Harbor Exporter image repository
      ## @skip exporter.image.tag Harbor Exporter image tag
      ## @param exporter.image.digest Harbor Exporter image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
      ## @param exporter.image.pullPolicy Harbor exporter image pull policy
      ## @param exporter.image.pullSecrets Specify docker-registry secret names as an array
      ## @param exporter.image.debug Specify if debug logs should be enabled
      ##
      image:
        ## Specify an imagePullPolicy
        ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
        ## ref: https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images
        ##
        pullPolicy: IfNotPresent
        ## Enable debug mode
        ##
        #debug: false
      ## @param exporter.replicaCount The replica count
      ##
      #replicaCount: 1
      ## Harbor Exporter resource requests and limits
      ## ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
      ## @param exporter.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if exporter.resources is set (exporter.resources is recommended for production).
      ## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
      ##
      resourcesPreset: "nano"
      ## @param exporter.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads)
      ## Example:
      ## resources:
      ##   requests:
      ##     cpu: 2
      ##     memory: 512Mi
      ##   limits:
      ##     cpu: 3
      ##     memory: 1024Mi
      ##
      #resources: {}
      ## @param exporter.updateStrategy.type The update strategy for deployments with persistent volumes: RollingUpdate or Recreate. Set it as Recreate when RWM for volumes isn't supported
      ## If replicas = 1, an update can get "stuck", as the previous pod remains attached to the
      ## PV, and the "incoming" pod can never start. Changing the strategy to "Recreate" will
      ## terminate the single previous pod, so that the new, incoming pod can attach to the PV
      ##
      #updateStrategy:
      #  type: RollingUpdate

    ## @section PostgreSQL Parameters
    ##

    ## PostgreSQL chart configuration
    ## ref: https://github.com/bitnami/charts/blob/main/bitnami/postgresql/values.yaml
    ## @param postgresql.enabled Switch to enable or disable the PostgreSQL helm chart
    ## @param postgresql.auth.enablePostgresUser Assign a password to the "postgres" admin user. Otherwise, remote access will be blocked for this user
    ## @param postgresql.auth.postgresPassword Password for the "postgres" admin user
    ## @param postgresql.auth.existingSecret Name of existing secret to use for PostgreSQL credentials
    ## @param postgresql.architecture PostgreSQL architecture (`standalone` or `replication`)
    ## @param postgresql.primary.extendedConfiguration Extended PostgreSQL Primary configuration (appended to main or default configuration)
    ## @param postgresql.primary.initdb.scripts [object] Initdb scripts to create Harbor databases
    ##
    postgresql:
      #enabled: true
      ## Override PostgreSQL default image as 14.x is not supported https://goharbor.io/docs/2.4.0/install-config/
      ## ref: https://github.com/bitnami/containers/tree/main/bitnami/postgresql
      ## @param postgresql.image.registry [default: REGISTRY_NAME] PostgreSQL image registry
      ## @param postgresql.image.repository [default: REPOSITORY_NAME/postgresql] PostgreSQL image repository
      ## @skip postgresql.image.tag PostgreSQL image tag (immutable tags are recommended)
      ## @param postgresql.image.digest PostgreSQL image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
      ##
      #auth:
      #  enablePostgresUser: true
      #  postgresPassword: not-secure-database-password
      #  existingSecret: "harbor-postgresql"
      #architecture: standalone
      primary:
        #persistence:
          #existingClaim: ""
        #extendedConfiguration: |
        #  max_connections = 1024
        # initdb:
        #   scripts:
        #     initial-registry.sql: |
        #       CREATE DATABASE registry ENCODING 'UTF8';
        #       \c registry;
        #       CREATE TABLE schema_migrations(version bigint not null primary key, dirty boolean not null);
        ## PostgreSQL Primary resource requests and limits
        ## ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
        ## @param postgresql.primary.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if primary.resources is set (primary.resources is recommended for production).
        ## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
        ##
        resourcesPreset: "small"
        ## @param postgresql.primary.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads)
        ## Example:
        ## resources:
        ##   requests:
        ##     cpu: 2
        ##     memory: 512Mi
        ##   limits:
        ##     cpu: 3
        ##     memory: 1024Mi
        ##
        #resources: {}

What do you see instead?

The core pod does not start, failing with the following error:

Error: container create failed: creating `/etc/core/token`: open `etc/core/token`: No such file or directory
carrodher commented 2 weeks ago

Are you removing the volumes? Please note that volumes are not deleted by default when uninstalling a Helm chart.

Amith211 commented 1 week ago

Pretty sure everything has been removed, leaving no residual objects. I'm using Longhorn for storage.

I tried again (as I had before) using the following steps:

carrodher commented 1 week ago

Unfortunately, I'm not able to reproduce the issue. Could you try deploying it with the default values (or just modifying the strictly necessary ones)?