bitnami / charts

Bitnami Helm Charts
https://bitnami.com

Harbor install failing due to references to non-existent redis.sentinel.* values. #14054

Closed: jim-barber-he closed this issue 1 year ago

jim-barber-he commented 1 year ago

Name and Version

bitnami/harbor 16.1.1

What steps will reproduce the bug?

In a Kubernetes 1.25.5 cluster deployed via kops 1.25.3, I am using helmfile and helm to deploy Harbor for use as a container registry proxy.

When trying to deploy version 16.1.1 I get the following error:

  Error: template: harbor/templates/registry/registry-dpl.yaml:33:31: executing "harbor/templates/registry/registry-dpl.yaml" at <include (print $.Template.BasePath "/registry/registry-cm.yaml") .>: error calling include: template: harbor/templates/_helpers.tpl:211:112: executing "harbor.redis.host" at <.Values.redis.sentinel.service.ports.sentinel>: nil pointer evaluating interface {}.service
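
I believe a values file as small as the following should reproduce it, since, like our full values file below, it disables the bundled Redis in favour of an external one (an untested sketch; the hostname is just a placeholder):

redis:
  enabled: false
externalRedis:
  host: redis.example.internal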

Are you using any custom parameters or values?

The values file we supply is below. It is a helmfile template; the {{- }} directives are used by helmfile to populate values from secrets that we export to the environment at deployment time.

{{- $affinity_type := "soft" }}
{{- if eq .Environment.Name "prod" }}
{{- $affinity_type = "hard" }}
{{- end }}

adminPassword: '{{ requiredEnv "ADMIN_PASSWORD" }}'

# Never enable this as an alternative to our existing ChartMuseum, because Harbor has deprecated ChartMuseum support and plans to remove it in a future release.
chartmuseum:
  enabled: false

core:
  # A string of 32 characters.
  {{- if env "CSRF_KEY" }}
  csrfKey: '{{ requiredEnv "CSRF_KEY" }}'
  {{- end }}
  podAntiAffinityPreset: '{{ $affinity_type }}'
  # The clusters need this running, so give it a priority class that can preempt lower-priority pods when necessary.
  priorityClassName: system-cluster-critical
  replicaCount: 2
  resources:
    limits:
      memory: 256Mi
    requests:
      cpu: 200m
      memory: 128Mi
  # Must be a string of 16 characters.
  secret: '{{ env "CORE_SECRET" | default (requiredEnv "SECRET_KEY") | b64enc }}'
  # Must be a string of 16 characters.
  secretKey: '{{ requiredEnv "SECRET_KEY" }}'
  # Harbor is critical for pulling images, so it must not get stuck when a node is tainted by Nidhogg for some services.
  # Nidhogg can taint nodes when required DaemonSets fail to pull their images because Harbor could not be scheduled.
  tolerations:
    - key: nidhogg.uswitch.com/fluent-bit.fluent-bit
      operator: Exists
      effect: NoSchedule
    - key: nidhogg.uswitch.com/kiam.kiam-agent
      operator: Exists
      effect: NoSchedule

exporter:
  replicaCount: 2
  resources:
    limits:
      memory: 32Mi
    requests:
      cpu: 10m
      memory: 16Mi

exposureType: ingress

externalDatabase:
  coreDatabase: '{{ requiredEnv "DB_DATABASE_CORE" }}'
  host: '{{ requiredEnv "DB_HOST" }}'
  # Remove the `if` statement if we enable the Harbor Notary.
  {{- if false }}
  notaryServerDatabase: '{{ requiredEnv "DB_DATABASE_NOTARY_SERVER" }}'
  notarySignerDatabase: '{{ requiredEnv "DB_DATABASE_NOTARY_SIGNER" }}'
  {{- end }}
  password: '{{ requiredEnv "DB_PASSWORD" }}'
  sslmode: require
  user: '{{ requiredEnv "DB_USERNAME" }}'

externalRedis:
  host: '{{ requiredEnv "REDIS_HOST" }}'

# Set to the same as ingress.core.hostname
externalURL: 'https://harbor.{{ requiredEnv "PRIVATE_DOMAIN" }}'

ingress:
  core:
    hostname: 'harbor.{{ requiredEnv "PRIVATE_DOMAIN" }}'
    pathType: Prefix
  notary:
    hostname: 'harbor-notary.{{ requiredEnv "PRIVATE_DOMAIN" }}'
    pathType: Prefix

internalTLS:
  enabled: true

jobservice:
  jobLogger: database
  replicaCount: 2
  resources:
    limits:
      memory: 64Mi
    requests:
      cpu: 40m
      memory: 32Mi
  # Must be a string of 16 characters.
  secret: '{{ env "JOBSERVICE_SECRET" | default (requiredEnv "SECRET_KEY") | b64enc }}'

logLevel: info

metrics:
  enabled: true
  serviceMonitor:
    enabled: true

notary:
  enabled: false
  server:
    replicaCount: 2
    resources: {}
  signer:
    replicaCount: 2
    resources: {}

persistence:
  imageChartStorage:
    # Redirects need to be disabled when the storage backend is S3.
    disableredirect: true
    s3:
      bucket: 'harbor-{{ requiredEnv "AWS_ACCOUNT_ALIAS" }}'
      region: ap-southeast-2
    type: s3

portal:
  replicaCount: 2
  resources:
    limits:
      memory: 16Mi
    requests:
      cpu: 5m
      memory: 8Mi

postgresql:
  enabled: false

redis:
  enabled: false

registry:
  controller:
    resources:
      limits:
        memory: 256Mi
      requests:
        cpu: 100m
        memory: 32Mi
  credentials:
    # The value of htpasswd is to be produced like so:
    # - Set the REGISTRY_CREDENTIALS_USERNAME and REGISTRY_CREDENTIALS_PASSWORD environment variables to the correct values.
    # - Create the htpasswd value using the bcrypt algorithm:
    #     HTPASSWD="$(htpasswd -nbBC10 "$REGISTRY_CREDENTIALS_USERNAME" "$REGISTRY_CREDENTIALS_PASSWORD")"
    # - Put "$HTPASSWD" into the SSM parameter store (example for test and prod):
    #     ssm_put.sh --secure test harbor/registry_credentials_htpasswd "$HTPASSWD"
    #     ssm_put.sh --secure prod harbor/registry_credentials_htpasswd "$HTPASSWD"
    htpasswd: '{{ requiredEnv "REGISTRY_CREDENTIALS_HTPASSWD" }}'
    password: '{{ requiredEnv "REGISTRY_CREDENTIALS_PASSWORD" }}'
    username: '{{ requiredEnv "REGISTRY_CREDENTIALS_USERNAME" }}'
  podAnnotations:
    iam.amazonaws.com/role: kiam.harbor.harbor
  podAntiAffinityPreset: '{{ $affinity_type }}'
  replicaCount: 2
  # Must be a string of 16 characters.
  secret: '{{ env "REGISTRY_SECRET" | default (requiredEnv "SECRET_KEY") | b64enc }}'
  server:
    resources:
      limits:
        memory: 256Mi
      requests:
        cpu: 100m
        memory: 32Mi

trivy:
  enabled: false
  replicaCount: 2
  resources: {}

What is the expected behavior?

It should just deploy like version 16.0.4 of the chart does.

What do you see instead?

Helm shows the following error.

  Error: template: harbor/templates/registry/registry-dpl.yaml:33:31: executing "harbor/templates/registry/registry-dpl.yaml" at <include (print $.Template.BasePath "/registry/registry-cm.yaml") .>: error calling include: template: harbor/templates/_helpers.tpl:211:112: executing "harbor.redis.host" at <.Values.redis.sentinel.service.ports.sentinel>: nil pointer evaluating interface {}.service

Additional information

This error also happens in version 16.1.0, which I believe is where the breaking change was introduced. Version 16.0.4 of the chart deploys without issue.

Part of the change in question is:

diff --git a/bitnami/harbor/templates/_helpers.tpl b/bitnami/harbor/templates/_helpers.tpl
index 5b58b6297..b232b7546 100644
--- a/bitnami/harbor/templates/_helpers.tpl
+++ b/bitnami/harbor/templates/_helpers.tpl
@@ -208,7 +208,7 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
 {{- end -}}

 {{- define "harbor.redis.host" -}}
-{{- ternary (printf "%s-master" (include "harbor.redis.fullname" .)) (ternary (printf "%s/%s" .Values.externalRedis.sentinel.hosts .Values.externalRedis.sentinel.masterSet) .Values.externalRedis.host .Values.externalRedis.sentinel.enabled) .Values.redis.enabled -}}
+{{- ternary (ternary (printf "%s/%s" (printf "%s-headless:%d" (include "harbor.redis.fullname" .) (int64 .Values.redis.sentinel.service.ports.sentinel)) .Values.redis.sentinel.masterSet) (printf "%s-master" (include "harbor.redis.fullname" .)) .Values.redis.sentinel.enabled) (ternary (printf "%s/%s" .Values.externalRedis.sentinel.hosts .Values.externalRedis.sentinel.masterSet) .Values.externalRedis.host .Values.externalRedis.sentinel.enabled) .Values.redis.enabled -}}
 {{- end -}}

 {{- define "harbor.redis.port" -}}

It refers to values such as .Values.redis.sentinel.service.ports.sentinel, .Values.redis.sentinel.masterSet, and .Values.redis.sentinel.enabled. However, the values.yaml file that is part of the chart does not set any of these, hence the nil pointer error. The redis section of the chart's values.yaml file contains only the following values:

redis:
  enabled: true
  auth:
    enabled: false
    ## Redis&reg; password (both master and slave). Defaults to a random 10-character alphanumeric string if not set and auth.enabled is true.
    ## It should always be set using the password value or in the existingSecret to avoid issues
    ## with Harbor.
    ## The password value is ignored if existingSecret is set
    ##
    password: ""
    existingSecret: ""
  architecture: standalone
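
Note that ternary is an ordinary template function, so all of its arguments are evaluated before the condition is applied; that is why the helper still dereferences .Values.redis.sentinel.service.ports.sentinel even though we set redis.enabled to false. Purely to illustrate the shape of a nil-safe lookup (this is not a proposed fix; it assumes a Helm version whose Sprig ships dig, and it uses the Redis chart defaults of mymaster and port 26379 as fallbacks), the helper could be written along these lines:

{{- define "harbor.redis.host" -}}
{{- if .Values.redis.enabled -}}
  {{- /* dig tolerates the sentinel section being absent from .Values.redis */ -}}
  {{- if dig "sentinel" "enabled" false .Values.redis -}}
    {{- printf "%s-headless:%d/%s" (include "harbor.redis.fullname" .) (int (dig "sentinel" "service" "ports" "sentinel" 26379 .Values.redis)) (dig "sentinel" "masterSet" "mymaster" .Values.redis) -}}
  {{- else -}}
    {{- printf "%s-master" (include "harbor.redis.fullname" .) -}}
  {{- end -}}
{{- else if .Values.externalRedis.sentinel.enabled -}}
  {{- printf "%s/%s" .Values.externalRedis.sentinel.hosts .Values.externalRedis.sentinel.masterSet -}}
{{- else -}}
  {{- .Values.externalRedis.host -}}
{{- end -}}
{{- end -}}
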
fmulero commented 1 year ago

Great! Thanks a lot @jim-barber-he for such a detailed explanation and for sharing all your findings.

You clearly understand where the problem is, so would you like to contribute by creating a PR to solve the issue? The Bitnami team will be happy to review it and provide feedback. You can find the contributing guidelines here.

jim-barber-he commented 1 year ago

The problem is that I have no idea what default values should be set for some of these, especially redis.sentinel.service.ports.sentinel. Some of the values line up with externalRedis.sentinel, so I could guess from those, but others do not have a corresponding match.

There is no documentation of the redis.sentinel.* values for the Harbor Helm chart in the README.md file, nor in the values.yaml file, and there is nothing in the comments in the templates/_helpers.tpl file where these values are used.

jim-barber-he commented 1 year ago

Actually, I'll give it a go, as the default values are in the Bitnami Redis chart.
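
For the record, what I have in mind is adding the keys that the "harbor.redis.host" helper dereferences to the Harbor chart's values.yaml, mirroring the Bitnami Redis subchart defaults; treat the exact values below as my assumption until the PR is reviewed:

redis:
  ## The existing enabled, auth and architecture keys stay as they are.
  sentinel:
    enabled: false
    masterSet: mymaster
    service:
      ports:
        sentinel: 26379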

jim-barber-he commented 1 year ago

The pipeline for my PR has 2 skipped checks. I guess these are something that you guys kick off?

carrodher commented 1 year ago

Thanks for creating the PR. Yes, those checks are executed once the verify label is added, which has to be done manually by the reviewers. Now the PR will be tested by installing the Helm chart on top of different k8s clusters; let's see the results 🤞