I am having difficulty exposing the Harbor URL on OpenShift with a Route in front of the ClusterIP service. Can anyone help me?
Are you using any custom parameters or values?
# Copyright VMware, Inc.
# SPDX-License-Identifier: APACHE-2.0
## @section Global parameters
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
##
## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
## @param global.storageClass Global StorageClass for Persistent Volume(s)
##
global:
imageRegistry: ""
## E.g.
## imagePullSecrets:
## - myRegistryKeySecretName
##
imagePullSecrets: []
storageClass: ""
## Compatibility adaptations for Kubernetes platforms
##
compatibility:
## Compatibility adaptations for Openshift
##
openshift:
## @param global.compatibility.openshift.adaptSecurityContext Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation)
##
adaptSecurityContext: force
## @section Common Parameters
##
## @param nameOverride String to partially override common.names.fullname template (will maintain the release name)
##
nameOverride: ""
## @param fullnameOverride String to fully override common.names.fullname template with a string
##
fullnameOverride: ""
## @param kubeVersion Force target Kubernetes version (using Helm capabilities if not set)
##
kubeVersion: ""
## @param clusterDomain Kubernetes Cluster Domain
##
clusterDomain: cluster.local
## @param commonAnnotations Annotations to add to all deployed objects
##
commonAnnotations: {}
## @param commonLabels Labels to add to all deployed objects
##
commonLabels: {}
## @param extraDeploy Array of extra objects to deploy with the release (evaluated as a template).
##
extraDeploy: []
## Enable diagnostic mode in the deployment(s)/statefulset(s)
##
diagnosticMode:
## @param diagnosticMode.enabled Enable diagnostic mode (all probes will be disabled and the command will be overridden)
##
enabled: false
## @param diagnosticMode.command Command to override all containers in the deployment(s)/statefulset(s)
##
command:
- sleep
## @param diagnosticMode.args Args to override all containers in the deployment(s)/statefulset(s)
##
args:
- infinity
## @section Harbor common parameters
##
## @param adminPassword The initial password of Harbor admin. Change it from portal after launching Harbor
##
adminPassword: ""
## @param externalURL The external URL for Harbor Core service
## It is used to
## 1) populate the docker/helm commands shown on the portal
##
## Format: protocol://domain[:port]. Usually:
## 1) if "exposureType" is "ingress", the "domain" should be
## the value of "ingress.hostname"
## 2) if "exposureType" is "proxy" and "service.type" is "ClusterIP",
## the "domain" should be the value of "service.clusterIP"
## 3) if "exposureType" is "proxy" and "service.type" is "NodePort",
## the "domain" should be the IP address of k8s node
## 4) if "exposureType" is "proxy" and "service.type" is "LoadBalancer",
## the "domain" should be the LoadBalancer IP
##
externalURL: https://XXXXXXXXXXXX.com
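## Example (hypothetical hostname): when Harbor is published through an
## OpenShift Route, the "domain" part should be the Route host, e.g.:
## externalURL: https://harbor.apps.mycluster.example.com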
## Note: If Harbor is exposed via Ingress, the NGINX server will not be used
## @param proxy.httpProxy The URL of the HTTP proxy server
## @param proxy.httpsProxy The URL of the HTTPS proxy server
## @param proxy.noProxy The URLs that the proxy settings do not apply to
## @param proxy.components The component list that the proxy settings apply to
##
proxy:
httpProxy: "http://xxxxxxxxxxxxxx.com:8080"
httpsProxy: "http://xxxxxxxxxxxxxx.com:8080"
noProxy: "localhost,127.0.0.0/8,::1"
components:
- core
- jobservice
- trivy
## @param logLevel The log level used for Harbor services. Allowed values are [ fatal \| error \| warn \| info \| debug \| trace ]
##
logLevel: debug
## TLS settings
## Note: TLS cert files need to be provided to each component in advance.
##
internalTLS:
## @param internalTLS.enabled Use TLS in all the supported containers: core, jobservice, portal, registry and trivy
##
enabled: false
## @param internalTLS.caBundleSecret Name of an existing secret with a custom CA that will be injected into the trust store for core, jobservice, registry, trivy components
## The secret must contain the key "ca.crt"
##
caBundleSecret: ""
## IP family parameters
##
ipFamily:
## @param ipFamily.ipv6.enabled Enable listening on IPv6 ([::]) for NGINX-based components (NGINX,portal)
## Note: enabling IPv6 will cause NGINX to crash on start on systems with IPv6 disabled (`ipv6.disable` kernel flag)
##
ipv6:
enabled: true
## @param ipFamily.ipv4.enabled Enable listening on IPv4 for NGINX-based components (NGINX,portal)
##
ipv4:
enabled: true
## @section Traffic Exposure Parameters
##
## @param exposureType The way to expose Harbor. Allowed values are [ ingress \| proxy ]
## Use "proxy" to deploy an NGINX proxy in front of Harbor services
## Use "ingress" to use an Ingress Controller as proxy
##
exposureType: "proxy"
## Service parameters
##
service:
## @param service.type NGINX proxy service type
##
type: "ClusterIP"
## @param service.ports.http NGINX proxy service HTTP port
## @param service.ports.https NGINX proxy service HTTPS port
##
ports:
http: 80
https: 443
## Node ports to expose
## @param service.nodePorts.http Node port for HTTP
## @param service.nodePorts.https Node port for HTTPS
## NOTE: choose port between <30000-32767>
##
nodePorts:
http: ""
https: ""
## @param service.sessionAffinity Control where client requests go, to the same pod or round-robin
## Values: ClientIP or None
## ref: https://kubernetes.io/docs/concepts/services-networking/service/
##
sessionAffinity: None
## @param service.sessionAffinityConfig Additional settings for the sessionAffinity
## sessionAffinityConfig:
## clientIP:
## timeoutSeconds: 300
##
sessionAffinityConfig: {}
## @param service.clusterIP NGINX proxy service Cluster IP
## e.g.:
## clusterIP: None
##
clusterIP: ""
## @param service.loadBalancerIP NGINX proxy service Load Balancer IP
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer
##
loadBalancerIP: ""
## @param service.loadBalancerSourceRanges NGINX proxy service Load Balancer sources
## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
## e.g:
## loadBalancerSourceRanges:
## - 10.10.10.0/24
##
loadBalancerSourceRanges: []
## @param service.externalTrafficPolicy NGINX proxy service external traffic policy
## ref https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
##
externalTrafficPolicy: Cluster
## @param service.annotations Additional custom annotations for NGINX proxy service
##
annotations: {}
## @param service.extraPorts Extra ports to expose on the NGINX proxy service
##
extraPorts: []
ingress:
## Configure the ingress resource that allows you to access Harbor Core
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
core:
## @param ingress.core.ingressClassName IngressClass that will be used to implement the Ingress (Kubernetes 1.18+)
## This is supported in Kubernetes 1.18+ and required if you have more than one IngressClass marked as the default for your cluster.
## ref: https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/
##
ingressClassName: ""
## @param ingress.core.pathType Ingress path type
##
pathType: ImplementationSpecific
## @param ingress.core.apiVersion Force Ingress API version (automatically detected if not set)
##
apiVersion: ""
## @param ingress.core.controller The ingress controller type. Currently supports `default`, `gce` and `ncp`
## leave as `default` for most ingress controllers.
## set to `gce` if using the GCE ingress controller
## set to `ncp` if using the NCP (NSX-T Container Plugin) ingress controller
##
controller: default
## @param ingress.core.hostname Default host for the ingress record
##
hostname: core.harbor.domain
## @param ingress.core.annotations [object] Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations.
## Use this parameter to set the required annotations for cert-manager, see
## ref: https://cert-manager.io/docs/usage/ingress/#supported-annotations
## e.g:
## annotations:
## kubernetes.io/ingress.class: nginx
## cert-manager.io/cluster-issuer: cluster-issuer-name
##
annotations:
ingress.kubernetes.io/ssl-redirect: "true"
ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
## @param ingress.core.tls Enable TLS configuration for the host defined at `ingress.core.hostname` parameter
## TLS certificates will be retrieved from a TLS secret with name: `{{- printf "%s-tls" .Values.ingress.core.hostname }}`
## You can:
## - Use the `ingress.core.secrets` parameter to create this TLS secret
## - Rely on cert-manager to create it by setting the corresponding annotations
## - Rely on Helm to create self-signed certificates by setting `ingress.core.selfSigned=true`
##
tls: false
## @param ingress.core.selfSigned Create a TLS secret for this ingress record using self-signed certificates generated by Helm
##
selfSigned: false
## @param ingress.core.extraHosts An array with additional hostname(s) to be covered with the ingress record
## e.g:
## extraHosts:
## - name: core.harbor.domain
## path: /
##
extraHosts: []
## @param ingress.core.extraPaths An array with additional arbitrary paths that may need to be added to the ingress under the main host
## e.g:
## extraPaths:
## - path: /*
## backend:
## serviceName: ssl-redirect
## servicePort: use-annotation
##
extraPaths: []
## @param ingress.core.extraTls TLS configuration for additional hostname(s) to be covered with this ingress record
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
## e.g:
## extraTls:
## - hosts:
## - core.harbor.domain
## secretName: core.harbor.domain-tls
##
extraTls: []
## @param ingress.core.secrets Custom TLS certificates as secrets
## NOTE: 'key' and 'certificate' are expected in PEM format
## NOTE: 'name' should line up with a 'secretName' set further up
## If it is not set and you're using cert-manager, this is unneeded, as it will create a secret for you with valid certificates
## If it is not set and you're NOT using cert-manager either, self-signed certificates will be created valid for 365 days
## It is also possible to create and manage the certificates outside of this helm chart
## Please see README.md for more information
## e.g:
## secrets:
## - name: core.harbor.domain-tls
## key: |-
## -----BEGIN RSA PRIVATE KEY-----
## ...
## -----END RSA PRIVATE KEY-----
## certificate: |-
## -----BEGIN CERTIFICATE-----
## ...
## -----END CERTIFICATE-----
##
secrets: []
## @param ingress.core.extraRules Additional rules to be covered with this ingress record
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
## e.g:
## extraRules:
## - host: example.local
## http:
## path: /
## backend:
## service:
## name: example-svc
## port:
## name: http
##
extraRules: []
##
## @section Persistence Parameters
##
## The persistence is enabled by default and a default StorageClass
## is needed in the k8s cluster to provision volumes dynamically.
## Specify another StorageClass in the "storageClass" or set "existingClaim"
## if you already have existing persistent volumes to use
##
## For storing images and charts, you can also use "azure", "gcs", "s3",
## "swift" or "oss". Set it in the "imageChartStorage" section
##
persistence:
## @param persistence.enabled Enable the data persistence or not
##
enabled: true
## Resource Policy
## @param persistence.resourcePolicy Set it to `keep` to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart is deleted
##
resourcePolicy: "keep"
persistentVolumeClaim:
## @param persistence.persistentVolumeClaim.registry.existingClaim Name of an existing PVC to use
## @param persistence.persistentVolumeClaim.registry.storageClass PVC Storage Class for Harbor Registry data volume
## Note: The default StorageClass will be used if not defined. Set it to `-` to disable dynamic provisioning
## @param persistence.persistentVolumeClaim.registry.subPath The sub path used in the volume
## @param persistence.persistentVolumeClaim.registry.accessModes The access mode of the volume
## @param persistence.persistentVolumeClaim.registry.size The size of the volume
## @param persistence.persistentVolumeClaim.registry.annotations Annotations for the PVC
## @param persistence.persistentVolumeClaim.registry.selector Selector to match an existing Persistent Volume
##
registry:
existingClaim: ""
storageClass: ""
subPath: ""
accessModes:
- ReadWriteOnce
size: 5Gi
annotations: {}
selector: {}
## @param persistence.persistentVolumeClaim.jobservice.existingClaim Name of an existing PVC to use
## @param persistence.persistentVolumeClaim.jobservice.storageClass PVC Storage Class for Harbor Jobservice data volume
## Note: The default StorageClass will be used if not defined. Set it to `-` to disable dynamic provisioning
## @param persistence.persistentVolumeClaim.jobservice.subPath The sub path used in the volume
## @param persistence.persistentVolumeClaim.jobservice.accessModes The access mode of the volume
## @param persistence.persistentVolumeClaim.jobservice.size The size of the volume
## @param persistence.persistentVolumeClaim.jobservice.annotations Annotations for the PVC
## @param persistence.persistentVolumeClaim.jobservice.selector Selector to match an existing Persistent Volume
##
jobservice:
existingClaim: ""
storageClass: ""
subPath: ""
accessModes:
- ReadWriteOnce
size: 1Gi
annotations: {}
selector: {}
## @param persistence.persistentVolumeClaim.trivy.storageClass PVC Storage Class for Trivy data volume
## Note: The default StorageClass will be used if not defined. Set it to `-` to disable dynamic provisioning
## @param persistence.persistentVolumeClaim.trivy.accessModes The access mode of the volume
## @param persistence.persistentVolumeClaim.trivy.size The size of the volume
## @param persistence.persistentVolumeClaim.trivy.annotations Annotations for the PVC
## @param persistence.persistentVolumeClaim.trivy.selector Selector to match an existing Persistent Volume
##
trivy:
storageClass: ""
accessModes:
- ReadWriteOnce
size: 5Gi
annotations: {}
selector: {}
## Define which storage backend is used for registry to store
## images and charts.
## ref: https://github.com/docker/distribution/blob/master/docs/configuration.md#storage
##
imageChartStorage:
## @param persistence.imageChartStorage.caBundleSecret Specify the `caBundleSecret` if the storage service uses a self-signed certificate. The secret must contain a key named `ca.crt`, which will be injected into the trust store of the registry's containers.
##
caBundleSecret: ""
## @param persistence.imageChartStorage.disableredirect The configuration for managing redirects from content backends. For backends which do not support it (such as using MinIO&reg; for `s3` storage type), please set it to `true` to disable redirects. Refer to the [guide](https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect) for more details
##
disableredirect: false
## @param persistence.imageChartStorage.type The type of storage for images and charts: `filesystem`, `azure`, `gcs`, `s3`, `swift` or `oss`. The type must be `filesystem` if you want to use persistent volumes for registry. Refer to the [guide](https://github.com/docker/distribution/blob/master/docs/configuration.md#storage) for more details
##
type: filesystem
## Images/charts storage parameters when type is "filesystem"
## @param persistence.imageChartStorage.filesystem.rootdirectory Filesystem storage type setting: Storage root directory
## @param persistence.imageChartStorage.filesystem.maxthreads Filesystem storage type setting: Maximum number of threads
##
filesystem:
rootdirectory: /storage
maxthreads: ""
## Images/charts storage parameters when type is "azure"
## @param persistence.imageChartStorage.azure.accountname Azure storage type setting: Name of the Azure account
## @param persistence.imageChartStorage.azure.accountkey Azure storage type setting: Key of the Azure account
## @param persistence.imageChartStorage.azure.container Azure storage type setting: Container
## @param persistence.imageChartStorage.azure.storagePrefix Azure storage type setting: Storage prefix
## @param persistence.imageChartStorage.azure.realm Azure storage type setting: Realm of the Azure account
##
azure:
accountname: accountname
accountkey: base64encodedaccountkey
container: containername
storagePrefix: /azure/harbor/charts
## Example realm
## realm: core.windows.net
##
realm: ""
## Images/charts storage parameters when type is "gcs"
## @param persistence.imageChartStorage.gcs.bucket GCS storage type setting: Bucket name
## @param persistence.imageChartStorage.gcs.encodedkey GCS storage type setting: Base64 encoded key
## @param persistence.imageChartStorage.gcs.rootdirectory GCS storage type setting: Root directory name
## @param persistence.imageChartStorage.gcs.chunksize GCS storage type setting: Chunk size
##
gcs:
bucket: bucketname
## The base64 encoded json file which contains the gcs key (file's content)
##
encodedkey: ""
rootdirectory: ""
chunksize: ""
## Images/charts storage parameters when type is "s3"
## ref: https://docs.docker.com/registry/storage-drivers/s3/
## @param persistence.imageChartStorage.s3.region S3 storage type setting: Region
## @param persistence.imageChartStorage.s3.bucket S3 storage type setting: Bucket name
## @param persistence.imageChartStorage.s3.accesskey S3 storage type setting: Access key name
## @param persistence.imageChartStorage.s3.secretkey S3 storage type setting: Secret Key name
## @param persistence.imageChartStorage.s3.regionendpoint S3 storage type setting: Region Endpoint
## @param persistence.imageChartStorage.s3.encrypt S3 storage type setting: Encrypt
## @param persistence.imageChartStorage.s3.keyid S3 storage type setting: Key ID
## @param persistence.imageChartStorage.s3.secure S3 storage type setting: Secure
## @param persistence.imageChartStorage.s3.skipverify S3 storage type setting: TLS skip verification
## @param persistence.imageChartStorage.s3.v4auth S3 storage type setting: V4 authorization
## @param persistence.imageChartStorage.s3.chunksize S3 storage type setting: Chunk size
## @param persistence.imageChartStorage.s3.rootdirectory S3 storage type setting: Root directory name
## @param persistence.imageChartStorage.s3.storageClass S3 storage type setting: Storage class
## @param persistence.imageChartStorage.s3.sse S3 storage type setting: SSE name
##
s3:
region: us-west-1
bucket: nova-harbor
accesskey: "REDACTED"
secretkey: "REDACTED"
regionendpoint: "https://s3.mousquetaires.com"
encrypt: ""
keyid: ""
secure: ""
skipverify: ""
v4auth: ""
chunksize: ""
rootdirectory: "ocp4-sandbox"
storageClass: ""
sse: ""
## Images/charts storage parameters when type is "swift"
## @param persistence.imageChartStorage.swift.authurl Swift storage type setting: Authentication URL
## @param persistence.imageChartStorage.swift.username Swift storage type setting: Username
## @param persistence.imageChartStorage.swift.password Swift storage type setting: Password
## @param persistence.imageChartStorage.swift.container Swift storage type setting: Container
## @param persistence.imageChartStorage.swift.region Swift storage type setting: Region
## @param persistence.imageChartStorage.swift.tenant Swift storage type setting: Tenant
## @param persistence.imageChartStorage.swift.tenantid Swift storage type setting: TenantID
## @param persistence.imageChartStorage.swift.domain Swift storage type setting: Domain
## @param persistence.imageChartStorage.swift.domainid Swift storage type setting: DomainID
## @param persistence.imageChartStorage.swift.trustid Swift storage type setting: TrustID
## @param persistence.imageChartStorage.swift.insecureskipverify Swift storage type setting: Verification
## @param persistence.imageChartStorage.swift.chunksize Swift storage type setting: Chunk
## @param persistence.imageChartStorage.swift.prefix Swift storage type setting: Prefix
## @param persistence.imageChartStorage.swift.secretkey Swift storage type setting: Secret Key
## @param persistence.imageChartStorage.swift.accesskey Swift storage type setting: Access Key
## @param persistence.imageChartStorage.swift.authversion Swift storage type setting: Auth
## @param persistence.imageChartStorage.swift.endpointtype Swift storage type setting: Endpoint
## @param persistence.imageChartStorage.swift.tempurlcontainerkey Swift storage type setting: Temp URL container key
## @param persistence.imageChartStorage.swift.tempurlmethods Swift storage type setting: Temp URL methods
##
swift:
authurl: https://storage.myprovider.com/v3/auth
username: ""
password: ""
container: ""
region: ""
tenant: ""
tenantid: ""
domain: ""
domainid: ""
trustid: ""
insecureskipverify: ""
chunksize: ""
prefix: ""
secretkey: ""
accesskey: ""
authversion: ""
endpointtype: ""
tempurlcontainerkey: ""
tempurlmethods: ""
## Images/charts storage parameters when type is "oss"
## @param persistence.imageChartStorage.oss.accesskeyid OSS storage type setting: Access key ID
## @param persistence.imageChartStorage.oss.accesskeysecret OSS storage type setting: Access key secret name containing the token
## @param persistence.imageChartStorage.oss.region OSS storage type setting: Region name
## @param persistence.imageChartStorage.oss.bucket OSS storage type setting: Bucket name
## @param persistence.imageChartStorage.oss.endpoint OSS storage type setting: Endpoint
## @param persistence.imageChartStorage.oss.internal OSS storage type setting: Internal
## @param persistence.imageChartStorage.oss.encrypt OSS storage type setting: Encrypt
## @param persistence.imageChartStorage.oss.secure OSS storage type setting: Secure
## @param persistence.imageChartStorage.oss.chunksize OSS storage type setting: Chunk
## @param persistence.imageChartStorage.oss.rootdirectory OSS storage type setting: Directory
## @param persistence.imageChartStorage.oss.secretkey OSS storage type setting: Secret key
##
oss:
accesskeyid: ""
accesskeysecret: ""
region: ""
bucket: ""
endpoint: ""
internal: ""
encrypt: ""
secure: ""
chunksize: ""
rootdirectory: ""
secretkey: ""
## @section Tracing parameters
##
## Tracing parameters:
## tracing: Configure tracing for Harbor; only one of tracing.jaeger.enabled and tracing.otel.enabled should be set
##
tracing:
## @param tracing.enabled Enable tracing
##
enabled: false
## @param tracing.sampleRate Tracing sample rate from 0 to 1
##
sampleRate: 1
## @param tracing.namespace Used to differentiate traces between different harbor services
##
namespace: ""
## @param tracing.attributes A key value dict containing user defined attributes used to initialize the trace provider
## e.g:
## attributes:
## application: harbor
##
attributes: {}
## @extra tracing.jaeger Configuration for exporting to jaeger. If using jaeger collector mode, use endpoint, username and password. If using jaeger agent mode, use agentHostname and agentPort.
## e.g:
## jaeger:
## enabled: true
## endpoint: http://hostname:14268/api/traces
## username: "jaeger-username"
## password: "jaeger-password"
## @param tracing.jaeger.enabled Enable jaeger export
## @param tracing.jaeger.endpoint Jaeger endpoint
## @param tracing.jaeger.username Jaeger username
## @param tracing.jaeger.password Jaeger password
## @param tracing.jaeger.agentHost Jaeger agent hostname
## @param tracing.jaeger.agentPort Jaeger agent port
##
jaeger:
enabled: false
endpoint: ""
username: ""
password: ""
agentHost: ""
agentPort: ""
## @extra tracing.otel Configuration for exporting to an otel endpoint
## @param tracing.otel.enabled Enable otel export
## @param tracing.otel.endpoint The hostname and port for an otel compatible backend
## @param tracing.otel.urlpath Url path of otel endpoint
## @param tracing.otel.compression Enable data compression
## @param tracing.otel.timeout The timeout for data transfer
## @param tracing.otel.insecure Ignore cert verification for otel backend
##
otel:
enabled: false
endpoint: "hostname:4318"
urlpath: "/v1/traces"
compression: false
timeout: 10s
insecure: true
## @section Volume Permissions parameters
##
## Init containers parameters:
## volumePermissions: Change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each node
##
volumePermissions:
## @param volumePermissions.enabled Enable init container that changes the owner and group of the persistent volume
##
enabled: false
## @param volumePermissions.image.registry [default: REGISTRY_NAME] Init container volume-permissions image registry
## @param volumePermissions.image.repository [default: REPOSITORY_NAME/os-shell] Init container volume-permissions image repository
## @skip volumePermissions.image.tag Init container volume-permissions image tag (immutable tags are recommended)
## @param volumePermissions.image.digest Init container volume-permissions image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
## @param volumePermissions.image.pullPolicy Init container volume-permissions image pull policy
## @param volumePermissions.image.pullSecrets Init container volume-permissions image pull secrets
##
image:
registry: docker.io
repository: bitnami/os-shell
# tag: 12-debian-12-r16
tag: 11-debian-11-r91
digest: ""
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## Example:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## Init container resource requests and limits
## ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
## @param volumePermissions.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if volumePermissions.resources is set (volumePermissions.resources is recommended for production).
## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
##
resourcesPreset: "none"
## @param volumePermissions.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads)
## Example:
## resources:
## requests:
## cpu: 2
## memory: 512Mi
## limits:
## cpu: 3
## memory: 1024Mi
##
resources: {}
## Init container's Security Context
## Note: the chown of the data folder is done to containerSecurityContext.runAsUser
## and not the below volumePermissions.containerSecurityContext.runAsUser
## @param volumePermissions.containerSecurityContext.enabled Enable init container Security Context
## @param volumePermissions.containerSecurityContext.seLinuxOptions [object,nullable] Set SELinux options in container
## @param volumePermissions.containerSecurityContext.runAsUser User ID for the init container
##
containerSecurityContext:
enabled: true
seLinuxOptions: null
runAsUser: null
## @section NGINX Parameters
##
nginx:
## Bitnami NGINX image
## ref: https://hub.docker.com/r/bitnami/nginx/tags/
## @param nginx.image.registry [default: REGISTRY_NAME] NGINX image registry
## @param nginx.image.repository [default: REPOSITORY_NAME/nginx] NGINX image repository
## @skip nginx.image.tag NGINX image tag (immutable tags are recommended)
## @param nginx.image.digest NGINX image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
## @param nginx.image.pullPolicy NGINX image pull policy
## @param nginx.image.pullSecrets NGINX image pull secrets
## @param nginx.image.debug Enable NGINX image debug mode
##
image:
registry: docker.io
repository: bitnami/nginx
# tag: 1.25.4-debian-12-r3
tag: 1.25.3-debian-11-r1
digest: ""
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## e.g:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## Enable debug mode
##
debug: false
## TLS parameters
##
tls:
## @param nginx.tls.enabled Enable TLS termination
##
enabled: true
## @param nginx.tls.existingSecret Existing secret name containing your own TLS certificates.
## The secret must contain the keys:
## `tls.crt` - the certificate (required),
## `tls.key` - the private key (required),
## `ca.crt` - CA certificate (optional)
## Self-signed TLS certificates will be used otherwise.
##
existingSecret: ""
## @param nginx.tls.commonName The common name used to generate the self-signed TLS certificates
##
commonName: core.harbor.domain
## @param nginx.behindReverseProxy If NGINX is behind another reverse proxy, set to true
## if the reverse proxy already provides the 'X-Forwarded-Proto' header field.
## This is, for example, the case for the OpenShift HAProxy router.
##
behindReverseProxy: false
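## Note: when Harbor is exposed through an OpenShift Route with edge or
## re-encrypt TLS termination, the HAProxy router already provides the
## 'X-Forwarded-Proto' header, so this would typically be set as follows:
## behindReverseProxy: true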
## @param nginx.command Override default container command (useful when using custom images)
##
command: []
## @param nginx.args Override default container args (useful when using custom images)
##
args: []
## @param nginx.extraEnvVars Array with extra environment variables to add to NGINX pods
##
extraEnvVars: []
## @param nginx.extraEnvVarsCM ConfigMap containing extra environment variables for NGINX pods
##
extraEnvVarsCM: ""
## @param nginx.extraEnvVarsSecret Secret containing extra environment variables (in case of sensitive data) for NGINX pods
##
extraEnvVarsSecret: ""
## @param nginx.containerPorts.http NGINX HTTP container port
## @param nginx.containerPorts.https NGINX HTTPS container port
##
containerPorts:
http: 8080
https: 8443
## @param nginx.replicaCount Number of NGINX replicas
##
replicaCount: 1
## Configure extra options for NGINX containers' liveness, readiness and startup probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
## @param nginx.livenessProbe.enabled Enable livenessProbe on NGINX containers
## @param nginx.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
## @param nginx.livenessProbe.periodSeconds Period seconds for livenessProbe
## @param nginx.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
## @param nginx.livenessProbe.failureThreshold Failure threshold for livenessProbe
## @param nginx.livenessProbe.successThreshold Success threshold for livenessProbe
##
livenessProbe:
enabled: true
initialDelaySeconds: 20
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
## @param nginx.readinessProbe.enabled Enable readinessProbe on NGINX containers
## @param nginx.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
## @param nginx.readinessProbe.periodSeconds Period seconds for readinessProbe
## @param nginx.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
## @param nginx.readinessProbe.failureThreshold Failure threshold for readinessProbe
## @param nginx.readinessProbe.successThreshold Success threshold for readinessProbe
##
readinessProbe:
enabled: true
initialDelaySeconds: 20
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
## @param nginx.startupProbe.enabled Enable startupProbe on NGINX containers
## @param nginx.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe
## @param nginx.startupProbe.periodSeconds Period seconds for startupProbe
## @param nginx.startupProbe.timeoutSeconds Timeout seconds for startupProbe
## @param nginx.startupProbe.failureThreshold Failure threshold for startupProbe
## @param nginx.startupProbe.successThreshold Success threshold for startupProbe
##
startupProbe:
enabled: false
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
failureThreshold: 15
successThreshold: 1
## @param nginx.customLivenessProbe Custom livenessProbe that overrides the default one
##
customLivenessProbe: {}
## @param nginx.customReadinessProbe Custom readinessProbe that overrides the default one
##
customReadinessProbe: {}
## @param nginx.customStartupProbe Custom startupProbe that overrides the default one
##
customStartupProbe: {}
## NGINX resource requests and limits
## ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
## @param nginx.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, small, medium, large, xlarge, 2xlarge). This is ignored if nginx.resources is set (nginx.resources is recommended for production).
## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
##
resourcesPreset: "none"
## @param nginx.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads)
## Example:
## resources:
## requests:
## cpu: 2
## memory: 512Mi
## limits:
## cpu: 3
## memory: 1024Mi
##
resources: {}
## Configure NGINX pods Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param nginx.podSecurityContext.enabled Enable NGINX pods' Security Context
## @param nginx.podSecurityContext.fsGroupChangePolicy Set filesystem group change policy
## @param nginx.podSecurityContext.sysctls Set kernel settings using the sysctl interface
## @param nginx.podSecurityContext.supplementalGroups Set filesystem extra groups
## @param nginx.podSecurityContext.fsGroup Set NGINX pod's Security Context fsGroup
##
Name and Version
bitnami/harbor:20.1.3
What architecture are you using?
None
What steps will reproduce the bug?
I am having difficulty exposing the Harbor URL on OpenShift with a Route in front of the ClusterIP service. Can anyone help me?
Are you using any custom parameters or values?
What do you see instead?
I can't connect to Harbor through an OpenShift Route.
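For anyone hitting the same issue: with `exposureType: proxy`, `service.type: ClusterIP` and `nginx.tls.enabled: true` as in the values above, one way to expose Harbor is a passthrough Route pointing at the chart's NGINX proxy Service. A minimal sketch, assuming a release named `harbor` whose proxy Service is also named `harbor`, and a hypothetical Route host (both must be adapted; the host must match `externalURL`):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: harbor
  namespace: harbor                                # hypothetical namespace
spec:
  host: harbor.apps.mycluster.example.com          # hypothetical host; must match externalURL
  to:
    kind: Service
    name: harbor                                   # the chart's NGINX proxy Service (name assumed from the release)
  port:
    targetPort: https                              # the Service's named https port (443)
  tls:
    termination: passthrough                       # let the chart's NGINX terminate TLS
```

With edge or re-encrypt termination at the router instead, `nginx.behindReverseProxy` would typically need to be `true` so that Harbor sees the original request scheme from `X-Forwarded-Proto`.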