
[bitnami/nats] Healthcheck failed with jetstream #28549

Closed Reverier-Xu closed 3 weeks ago

Reverier-Xu commented 1 month ago

Name and Version

bitnami/nats 8.2.15

What architecture are you using?

amd64

What steps will reproduce the bug?

  1. helm fetch bitnami/nats --untar
  2. modify values.yaml to enable jetstream
  3. helm install platform-queue ./nats
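
The same install can be approximated in one step with `--set` flags covering the key values below (a hypothetical equivalent, secrets omitted):

helm install platform-queue bitnami/nats --version 8.2.15 \
  --set jetstream.enabled=true \
  --set auth.enabled=true \
  --set auth.user=platform \
  --set cluster.auth.enabled=true \
  --set cluster.auth.user=queue-cluster \
  --set global.defaultStorageClass=storage-queue \
  --set persistence.storageClass=storage-queue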

Are you using any custom parameters or values?

Here is my values.yaml:

# Copyright Broadcom, Inc. All Rights Reserved.
# SPDX-License-Identifier: APACHE-2.0

## @section Global parameters
## Global Docker image parameters
## Please note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass

## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
## @param global.defaultStorageClass Global default StorageClass for Persistent Volume(s)
##
global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: []
  defaultStorageClass: "storage-queue"
  ## Compatibility adaptations for Kubernetes platforms
  ##
  compatibility:
    ## Compatibility adaptations for Openshift
    ##
    openshift:
      ## @param global.compatibility.openshift.adaptSecurityContext Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation)
      ##
      adaptSecurityContext: auto
## @section Common parameters

## @param kubeVersion Force target Kubernetes version (using Helm capabilities if not set)
##
kubeVersion: ""
## @param nameOverride String to partially override common.names.fullname template (will maintain the release name)
##
nameOverride: ""
## @param fullnameOverride String to fully override common.names.fullname template
##
fullnameOverride: ""
## @param commonLabels Add labels to all the deployed resources
##
commonLabels: {}
## @param commonAnnotations Add annotations to all the deployed resources
##
commonAnnotations: {}
## @param clusterDomain Kubernetes Cluster Domain
##
clusterDomain: cluster.local
## @param extraDeploy Array of extra objects to deploy with the release
##
extraDeploy: []
## Enable diagnostic mode in the deployment
## @param diagnosticMode.enabled Enable diagnostic mode (all probes will be disabled and the command will be overridden)
## @param diagnosticMode.command Command to override all containers in the deployment
## @param diagnosticMode.args Args to override all containers in the deployment
##
diagnosticMode:
  enabled: false
  command:
    - sleep
  args:
    - infinity
## @section NATS parameters

## Bitnami NATS image version
## ref: https://hub.docker.com/r/bitnami/nats/tags/
## @param image.registry [default: REGISTRY_NAME] NATS image registry
## @param image.repository [default: REPOSITORY_NAME/nats] NATS image repository
## @skip image.tag NATS image tag (immutable tags are recommended)
## @param image.digest NATS image digest in the form sha256:aa.... Please note this parameter, if set, will override the tag
## @param image.pullPolicy NATS image pull policy
## @param image.pullSecrets NATS image pull secrets
## @param image.debug Enable NATS image debug mode
##
image:
  registry: docker.io
  repository: bitnami/nats
  tag: 2.10.18-debian-12-r2
  digest: ""
  ## Specify an imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ## Example:
  ## pullSecrets:
  ##   - myRegistryKeySecretName
  ##
  pullSecrets: []
  ## Enable debug mode
  ##
  debug: false
## Client Authentication
## ref: https://docs.nats.io/running-a-nats-service/configuration/securing_nats/auth_intro
## @param auth.enabled Switch to enable/disable client authentication
## @param auth.user Client authentication user
## @param auth.password Client authentication password
## @param auth.token Client authentication token
## @param auth.timeout Client authentication timeout (seconds)
## @param auth.usersCredentials Client authentication users credentials collection
## Example:
## auth.usersCredentials:
##   - username: "a"
##     password: "b"
## @param auth.noAuthUser Client authentication username from auth.usersCredentials map to be used when no credentials provided
##
auth:
  enabled: true
  user: platform
  password: "[removed]"
  token: "[removed]"
  timeout: 1
  usersCredentials: []
  noAuthUser: ""
## Cluster Configuration
## ref: https://docs.nats.io/running-a-nats-service/configuration/clustering/cluster_config
##
cluster:
  ## @param cluster.name Cluster name
  name: nats
  ## @param cluster.connectRetries Configure number of connect retries for implicit routes, otherwise leave blank
  ##
  connectRetries: ""
  ## Cluster Authentication
  ## ref: https://docs.nats.io/running-a-nats-service/configuration/securing_nats/auth_intro
  ## @param cluster.auth.enabled Switch to enable/disable cluster authentication
  ## @param cluster.auth.user Cluster authentication user
  ## @param cluster.auth.password Cluster authentication password
  ##
  auth:
    enabled: true
    user: queue-cluster
    password: "[removed]"
## JetStream Configuration
## ref: https://docs.nats.io/running-a-nats-service/configuration/resource_management
## @param jetstream.enabled Switch to enable/disable JetStream
## @param jetstream.maxMemory Max memory usage for JetStream
##
jetstream:
  enabled: true
  maxMemory: 512M
## Logging parameters
## ref: https://github.com/nats-io/gnatsd#command-line-arguments
## @param debug.enabled Switch to enable/disable debug on logging
## @param debug.trace Switch to enable/disable trace debug level on logging
## @param debug.logtime Switch to enable/disable logtime on logging
##
debug:
  enabled: false
  trace: false
  logtime: false
## System override parameters
## ref: https://docs.nats.io/nats-server/configuration
## @param maxConnections Max. number of client connections
## @param maxControlLine Max. protocol control line
## @param maxPayload Max. payload
## @param writeDeadline Duration the server can block on a socket write to a client
## e.g:
## maxConnections: 100
## maxControlLine: 512
## maxPayload: 65536
## writeDeadline: "2s"
##
maxConnections: ""
maxControlLine: ""
maxPayload: ""
writeDeadline: ""
## @param natsFilename Filename used by several NATS files (binary, configuration file, and pid file)
## NATS filenames:
##  - For NATS 1.x.x versions, some filenames (binary, configuration file, pid file) use `gnatsd` as part of the name.
##  - For NATS 2.x.x versions, those filenames use `nats-server`
## In order to make the chart compatible with NATS versions 1.0.0 and 2.0.0 we have parametrized the following value
## to specify the proper filename according to the image version.
##
natsFilename: nats-server
## @param configuration [string] Specify content for NATS configuration file (generated based on other parameters otherwise)
## e.g:
## configuration: |-
##   listen: 0.0.0.0:6222
##   http: 0.0.0.0:8222
##   ...
##
configuration: |-
  {{- $authPwd := default (include "nats.randomPassword" .) .Values.auth.password -}}
  {{- $clusterAuthPwd := default (include "nats.randomPassword" .) .Values.cluster.auth.password -}}
  {{- if eq .Values.resourceType "statefulset" }}
  server_name: $NATS_SERVER_NAME
  {{- end }}
  listen: 0.0.0.0:{{ .Values.containerPorts.client }}
  http: 0.0.0.0:{{ .Values.containerPorts.monitoring }}

  # Authorization for client connections
  {{- if .Values.auth.enabled }}
  authorization {
    {{- if .Values.auth.user }}
    user: {{ .Values.auth.user | quote }}
    password: {{ $authPwd | quote }}
    {{- else if .Values.auth.token }}
    token: {{ .Values.auth.token | quote }}
    {{- else if .Values.auth.usersCredentials }}
    users: [
      {{- range $user := .Values.auth.usersCredentials }}
        { user: {{ $user.username | quote }}, password: {{ $user.password | quote }} },
      {{- end }}
    ],
    {{- end }}
    timeout:  {{ int .Values.auth.timeout }}
  }
  {{- if .Values.auth.noAuthUser }}
  no_auth_user: {{ .Values.auth.noAuthUser | quote }}
  {{- end }}
  {{- end }}

  # Logging options
  debug: {{ ternary "true" "false" (or .Values.debug.enabled .Values.diagnosticMode.enabled) }}
  trace: {{ ternary "true" "false" (or .Values.debug.trace .Values.diagnosticMode.enabled) }}
  logtime: {{ ternary "true" "false" (or .Values.debug.logtime .Values.diagnosticMode.enabled) }}
  # Pid file
  pid_file: "/opt/bitnami/nats/tmp/{{ .Values.natsFilename }}.pid"

  # Some system overrides
  {{- if .Values.maxConnections }}
  max_connections: {{ int .Values.maxConnections }}
  {{- end }}
  {{- if .Values.maxControlLine }}
  max_control_line: {{ int .Values.maxControlLine }}
  {{- end }}
  {{- if .Values.maxPayload }}
  max_payload: {{ int .Values.maxPayload }}
  {{- end }}
  {{- if .Values.writeDeadline }}
  write_deadline: {{ .Values.writeDeadline | quote }}
  {{- end }}

  # Clustering definition
  cluster {
    name: {{ .Values.cluster.name | quote }}
    listen: 0.0.0.0:{{ .Values.containerPorts.cluster }}
    {{- if .Values.cluster.auth.enabled }}
    # Authorization for cluster connections
    authorization {
      user: {{ .Values.cluster.auth.user | quote }}
      password: {{ $clusterAuthPwd | quote }}
      timeout:  1
    }
    {{- end }}
    # Routes are actively solicited and connected to from this server.
    # Other servers can connect to us if they supply the correct credentials
    # in their routes definitions from above
    routes = [
      {{- if .Values.cluster.auth.enabled }}
      nats://{{ .Values.cluster.auth.user }}:{{ $clusterAuthPwd }}@{{ include "common.names.fullname" . }}:{{ .Values.service.ports.cluster }}
      {{- else }}
      nats://{{ template "common.names.fullname" . }}:{{ .Values.service.ports.cluster }}
      {{- end }}
    ]
    {{- if .Values.cluster.connectRetries }}
    # Configure number of connect retries for implicit routes
    connect_retries: {{ .Values.cluster.connectRetries }}
    {{- end }}
  }

  {{- if .Values.jetstream.enabled }}
  # JetStream configuration
  jetstream: enabled
  jetstream {
    store_dir: /data/jetstream
    max_memory_store: {{ .Values.jetstream.maxMemory }}
    max_file_store: {{ .Values.persistence.size }}
  }
  {{- end }}
## @param existingSecret The name of an existing Secret with your custom configuration for NATS
## NOTE: When it's set the configuration parameter is ignored
##
existingSecret: ""
## @param command Override default container command (useful when using custom images)
##
command: []
## @param args Override default container args (useful when using custom images)
##
args: []
## @param extraEnvVars Extra environment variables to be set on NATS container
## Example:
## extraEnvVars:
##   - name: FOO
##     value: "bar"
##
extraEnvVars: []
## @param extraEnvVarsCM ConfigMap with extra environment variables
##
extraEnvVarsCM: ""
## @param extraEnvVarsSecret Secret with extra environment variables
##
extraEnvVarsSecret: ""
## @section NATS deployment/statefulset parameters

## @param resourceType NATS cluster resource type under Kubernetes. Allowed values: `statefulset` (default) or `deployment`
## ref:
## - https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
## - https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
##
resourceType: statefulset
## @param replicaCount Number of NATS nodes
##
replicaCount: 1
## @param schedulerName Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
## @param priorityClassName Name of pod priority class
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
## Strategy to use to update Pods
##
updateStrategy:
  ## @param updateStrategy.type StrategyType. Can be set to RollingUpdate or OnDelete
  ##
  type: RollingUpdate
## @param containerPorts.client NATS client container port
## @param containerPorts.cluster NATS cluster container port
## @param containerPorts.monitoring NATS monitoring container port
##
containerPorts:
  client: 4222
  cluster: 6222
  monitoring: 8222
## Configure Pods Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param podSecurityContext.enabled Enable NATS pods' Security Context
## @param podSecurityContext.fsGroupChangePolicy Set filesystem group change policy
## @param podSecurityContext.sysctls Set kernel settings using the sysctl interface
## @param podSecurityContext.supplementalGroups Set filesystem extra groups
## @param podSecurityContext.fsGroup Set NATS pod's Security Context fsGroup
##
podSecurityContext:
  enabled: true
  fsGroupChangePolicy: Always
  sysctls: []
  supplementalGroups: []
  fsGroup: 1001
## Configure Container Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param containerSecurityContext.enabled Enable containers' Security Context
## @param containerSecurityContext.seLinuxOptions [object,nullable] Set SELinux options in container
## @param containerSecurityContext.runAsUser Set containers' Security Context runAsUser
## @param containerSecurityContext.runAsGroup Set containers' Security Context runAsGroup
## @param containerSecurityContext.runAsNonRoot Set container's Security Context runAsNonRoot
## @param containerSecurityContext.privileged Set container's Security Context privileged
## @param containerSecurityContext.readOnlyRootFilesystem Set container's Security Context readOnlyRootFilesystem
## @param containerSecurityContext.allowPrivilegeEscalation Set container's Security Context allowPrivilegeEscalation
## @param containerSecurityContext.capabilities.drop List of capabilities to be dropped
## @param containerSecurityContext.seccompProfile.type Set container's Security Context seccomp profile
##
containerSecurityContext:
  enabled: true
  seLinuxOptions: null
  runAsUser: 1001
  runAsGroup: 1001
  runAsNonRoot: true
  privileged: false
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: "RuntimeDefault"
## NATS resource requests and limits
## ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
##
resourcesPreset: "nano"
## @param resources Set container requests and limits for different resources like CPU or memory (essential for production workloads)
## Example:
## resources:
##   requests:
##     cpu: 2
##     memory: 512Mi
##   limits:
##     cpu: 3
##     memory: 1024Mi
##
resources: {}
## NATS containers' liveness probe.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
## @param livenessProbe.enabled Enable livenessProbe
## @param livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
## @param livenessProbe.periodSeconds Period seconds for livenessProbe
## @param livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
## @param livenessProbe.failureThreshold Failure threshold for livenessProbe
## @param livenessProbe.successThreshold Success threshold for livenessProbe
##
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
## Configure extra options for readiness probe
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
## @param readinessProbe.enabled Enable readinessProbe
## @param readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
## @param readinessProbe.periodSeconds Period seconds for readinessProbe
## @param readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
## @param readinessProbe.failureThreshold Failure threshold for readinessProbe
## @param readinessProbe.successThreshold Success threshold for readinessProbe
##
readinessProbe:
  enabled: true
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
## @param startupProbe.enabled Enable startupProbe on NATS containers
## @param startupProbe.initialDelaySeconds Initial delay seconds for startupProbe
## @param startupProbe.periodSeconds Period seconds for startupProbe
## @param startupProbe.timeoutSeconds Timeout seconds for startupProbe
## @param startupProbe.failureThreshold Failure threshold for startupProbe
## @param startupProbe.successThreshold Success threshold for startupProbe
##
startupProbe:
  enabled: false
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
## @param customLivenessProbe Override default liveness probe
##
customLivenessProbe: {}
## @param customReadinessProbe Override default readiness probe
##
customReadinessProbe: {}
## @param customStartupProbe Custom startupProbe that overrides the default one
##
customStartupProbe: {}
## @param automountServiceAccountToken Mount Service Account token in pod
##
automountServiceAccountToken: false
## @param hostAliases Deployment pod host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
##
hostAliases: []
## @param podLabels Extra labels for NATS pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## @param podAnnotations Annotations for NATS pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## @param podAffinityPreset Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAffinityPreset: ""
## @param podAntiAffinityPreset Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAntiAffinityPreset: soft
## Node affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
##
nodeAffinityPreset:
  ## @param nodeAffinityPreset.type Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
  ##
  type: ""
  ## @param nodeAffinityPreset.key Node label key to match. Ignored if `affinity` is set.
  ## E.g.
  ## key: "kubernetes.io/e2e-az-name"
  ##
  key: ""
  ## @param nodeAffinityPreset.values Node label values to match. Ignored if `affinity` is set.
  ## E.g.
  ## values:
  ##   - e2e-az1
  ##   - e2e-az2
  ##
  values: []
## @param affinity Affinity for pod assignment. Evaluated as a template.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set
##
affinity: {}
## @param nodeSelector Node labels for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
##
nodeSelector: {}
## @param tolerations Tolerations for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## @param topologySpreadConstraints Topology Spread Constraints for NATS pods assignment spread across your cluster among failure-domains
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods
##
topologySpreadConstraints: []
## @param lifecycleHooks for the NATS container(s) to automate configuration before or after startup
##
lifecycleHooks: {}
## @param extraVolumes Optionally specify extra list of additional volumes for NATS pods
##
extraVolumes: []
## @param extraVolumeMounts Optionally specify extra list of additional volumeMounts for NATS container(s)
##
extraVolumeMounts: []
## @param initContainers Add additional init containers to the NATS pods
## Example:
## initContainers:
##   - name: your-image-name
##     image: your-image
##     imagePullPolicy: Always
##     ports:
##       - name: portname
##         containerPort: 1234
##
initContainers: []
## @param sidecars Add additional sidecar containers to the NATS pods
## Example:
## sidecars:
##   - name: your-image-name
##     image: your-image
##     imagePullPolicy: Always
##     ports:
##       - name: portname
##         containerPort: 1234
##
sidecars: []
## Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
  ## @param serviceAccount.create Enable creation of ServiceAccount for NATS pod
  ##
  create: true
  ## @param serviceAccount.name The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the common.names.fullname template
  ##
  name: ""
  ## @param serviceAccount.automountServiceAccountToken Allows auto mount of ServiceAccountToken on the serviceAccount created
  ## Can be set to false if pods using this serviceAccount do not need to use K8s API
  ##
  automountServiceAccountToken: false
  ## @param serviceAccount.annotations Additional custom annotations for the ServiceAccount
  ##
  annotations: {}
## @section Traffic Exposure parameters

## NATS service parameters
##
service:
  ## @param service.type NATS service type
  ##
  type: ClusterIP
  ## @param service.ports.client NATS client service port
  ## @param service.ports.cluster NATS cluster service port
  ## @param service.ports.monitoring NATS monitoring service port
  ##
  ports:
    client: 4222
    cluster: 6222
    monitoring: 8222
  ## Node ports to expose
  ## @param service.nodePorts.client Node port for clients
  ## @param service.nodePorts.cluster Node port for clustering
  ## @param service.nodePorts.monitoring Node port for monitoring
  ## NOTE: choose port between <30000-32767>
  ##
  nodePorts:
    client: ""
    cluster: ""
    monitoring: ""
  ## @param service.sessionAffinity Control where client requests go, to the same pod or round-robin
  ## Values: ClientIP or None
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/
  ##
  sessionAffinity: None
  ## @param service.sessionAffinityConfig Additional settings for the sessionAffinity
  ## sessionAffinityConfig:
  ##   clientIP:
  ##     timeoutSeconds: 300
  ##
  sessionAffinityConfig: {}
  ## @param service.clusterIP NATS service Cluster IP
  ## e.g.:
  ## clusterIP: None
  ##
  clusterIP: ""
  ## @param service.loadBalancerIP NATS service Load Balancer IP
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer
  ##
  loadBalancerIP: ""
  ## @param service.loadBalancerSourceRanges NATS service Load Balancer sources
  ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ## e.g:
  ## loadBalancerSourceRanges:
  ##   - 10.10.10.0/24
  ##
  loadBalancerSourceRanges: []
  ## @param service.externalTrafficPolicy NATS service external traffic policy
  ## ref https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
  ##
  externalTrafficPolicy: Cluster
  ## @param service.annotations Additional custom annotations for NATS service
  ##
  annotations: {}
  ## @param service.extraPorts Extra ports to expose in the NATS service (normally used with the `sidecar` value)
  ##
  extraPorts: []
  ## Headless service properties
  ##
  headless:
    ## @param service.headless.annotations Annotations for the headless service.
    ##
    annotations: {}
## NATS ingress parameters
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
  ## @param ingress.enabled Set to true to enable ingress record generation
  ##
  enabled: false
  ## @param ingress.pathType Ingress Path type
  ##
  pathType: ImplementationSpecific
  ## @param ingress.apiVersion Override API Version (automatically detected if not set)
  ##
  apiVersion: ""
  ## @param ingress.hostname When the ingress is enabled, a host pointing to this will be created
  ##
  hostname: nats.local
  ## @param ingress.path The Path to NATS. You may need to set this to '/*' in order to use this with ALB ingress controllers.
  ##
  path: /
  ## @param ingress.ingressClassName IngressClass that will be used to implement the Ingress (Kubernetes 1.18+)
  ## This is supported in Kubernetes 1.18+ and required if you have more than one IngressClass marked as the default for your cluster.
  ## ref: https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/
  ##
  ingressClassName: ""
  ## @param ingress.annotations Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations.
  ## Use this parameter to set the required annotations for cert-manager, see
  ## ref: https://cert-manager.io/docs/usage/ingress/#supported-annotations
  ## e.g:
  ## annotations:
  ##   kubernetes.io/ingress.class: nginx
  ##   cert-manager.io/cluster-issuer: cluster-issuer-name
  ##
  annotations: {}
  ## @param ingress.tls Enable TLS configuration for the host defined at `ingress.hostname` parameter
  ## TLS certificates will be retrieved from a TLS secret with name: `{{- printf "%s-tls" .Values.ingress.hostname }}`
  ## You can:
  ##   - Use the `ingress.secrets` parameter to create this TLS secret
  ##   - Rely on cert-manager to create it by setting the corresponding annotations
  ##   - Rely on Helm to create self-signed certificates by setting `ingress.selfSigned=true`
  ##
  tls: false
  ## @param ingress.selfSigned Create a TLS secret for this ingress record using self-signed certificates generated by Helm
  ##
  selfSigned: false
  ## @param ingress.extraHosts The list of additional hostnames to be covered with this ingress record.
  ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
  ## extraHosts:
  ## - name: nats.local
  ##   path: /
  ##
  extraHosts: []
  ## @param ingress.extraPaths Any additional arbitrary paths that may need to be added to the ingress under the main host.
  ## For example: The ALB ingress controller requires a special rule for handling SSL redirection.
  ## extraPaths:
  ## - path: /*
  ##   backend:
  ##     serviceName: ssl-redirect
  ##     servicePort: use-annotation
  ##
  extraPaths: []
  ## @param ingress.extraTls The tls configuration for additional hostnames to be covered with this ingress record.
  ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
  ## extraTls:
  ## - hosts:
  ##     - nats.local
  ##   secretName: nats.local-tls
  ##
  extraTls: []
  ## @param ingress.secrets If you're providing your own certificates, please use this to add the certificates as secrets
  ## key and certificate should start with -----BEGIN CERTIFICATE----- or
  ## -----BEGIN RSA PRIVATE KEY-----
  ##
  ## name should line up with a tlsSecret set further up
  ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
  ##
  ## It is also possible to create and manage the certificates outside of this helm chart
  ## Please see README.md for more information
  ## e.g:
  ## - name: nats.local-tls
  ##   key:
  ##   certificate:
  ##
  secrets: []
  ## @param ingress.extraRules Additional rules to be covered with this ingress record
  ## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
  ## e.g:
  ## extraRules:
  ## - host: nats.local
  ##     http:
  ##       path: /
  ##       backend:
  ##         service:
  ##           name: nats-svc
  ##           port:
  ##             name: http
  ##
  extraRules: []
## Network Policy configuration
## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
##
networkPolicy:
  ## @param networkPolicy.enabled Enable creation of NetworkPolicy resources
  ##
  enabled: true
  ## @param networkPolicy.allowExternal The Policy model to apply
  ## When set to false, only pods with the correct client label will have network access to the ports NATS is
  ## listening on. When true, NATS will accept connections from any source (with the correct destination port).
  ##
  allowExternal: true
  ## @param networkPolicy.allowExternalEgress Allow the pod to access any range of port and all destinations.
  ##
  allowExternalEgress: true
  ## @param networkPolicy.extraIngress [array] Add extra ingress rules to the NetworkPolicy
  ## e.g:
  ## extraIngress:
  ##   - ports:
  ##       - port: 1234
  ##     from:
  ##       - podSelector:
  ##           - matchLabels:
  ##               - role: frontend
  ##       - podSelector:
  ##           - matchExpressions:
  ##               - key: role
  ##                 operator: In
  ##                 values:
  ##                   - frontend
  ##
  extraIngress: []
  ## @param networkPolicy.extraEgress [array] Add extra egress rules to the NetworkPolicy
  ## e.g:
  ## extraEgress:
  ##   - ports:
  ##       - port: 1234
  ##     to:
  ##       - podSelector:
  ##           - matchLabels:
  ##               - role: frontend
  ##       - podSelector:
  ##           - matchExpressions:
  ##               - key: role
  ##                 operator: In
  ##                 values:
  ##                   - frontend
  ##
  extraEgress: []
  ## @param networkPolicy.ingressNSMatchLabels [object] Labels to match to allow traffic from other namespaces
  ## @param networkPolicy.ingressNSPodMatchLabels [object] Pod labels to match to allow traffic from other namespaces
  ##
  ingressNSMatchLabels: {}
  ingressNSPodMatchLabels: {}
## @section Metrics parameters

## Metrics / Prometheus NATS Exporter
## ref: https://github.com/nats-io/prometheus-nats-exporter
##
metrics:
  ## @param metrics.enabled Enable Prometheus metrics via exporter side-car
  ##
  enabled: false
  ## @param metrics.image.registry [default: REGISTRY_NAME] Prometheus metrics exporter image registry
  ## @param metrics.image.repository [default: REPOSITORY_NAME/nats-exporter] Prometheus metrics exporter image repository
  ## @skip metrics.image.tag Prometheus metrics exporter image tag (immutable tags are recommended)
  ## @param metrics.image.digest NATS Exporter image digest in the form sha256:aa.... Please note this parameter, if set, will override the tag
  ## @param metrics.image.pullPolicy Prometheus metrics image pull policy
  ## @param metrics.image.pullSecrets Prometheus metrics image pull secrets
  ##
  image:
    registry: docker.io
    repository: bitnami/nats-exporter
    tag: 0.15.0-debian-12-r10
    digest: ""
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## e.g:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  ## @param metrics.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if metrics.resources is set (metrics.resources is recommended for production).
  ## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
  ##
  resourcesPreset: "nano"
  ## @param metrics.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads)
  ## Example:
  ## resources:
  ##   requests:
  ##     cpu: 2
  ##     memory: 512Mi
  ##   limits:
  ##     cpu: 3
  ##     memory: 1024Mi
  ## ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
  ##
  resources: {}
  ## @param metrics.containerPorts.http Prometheus metrics exporter port
  ##
  containerPorts:
    http: 7777
  ## @param metrics.flags [array] Flags to be passed to Prometheus metrics
  ##
  flags:
    - -connz
    - -routez
    - -subz
    - -varz
  ## Metrics service configuration
  ##
  service:
    ## @param metrics.service.type Kubernetes service type (`ClusterIP`, `NodePort` or `LoadBalancer`)
    ##
    type: ClusterIP
    ## @param metrics.service.port Prometheus metrics service port
    ##
    port: 7777
    ## @param metrics.service.loadBalancerIP Use serviceLoadBalancerIP to request a specific static IP, otherwise leave blank
    ##
    loadBalancerIP: ""
    ## @param metrics.service.annotations [object] Annotations for Prometheus metrics service
    ##
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "{{ .Values.metrics.service.port }}"
    ## @param metrics.service.labels Labels for Prometheus metrics service
    ##
    labels: {}
  ## Prometheus Operator ServiceMonitor configuration
  ##
  serviceMonitor:
    ## @param metrics.serviceMonitor.enabled Specify if a ServiceMonitor will be deployed for Prometheus Operator
    ##
    enabled: false
    ## @param metrics.serviceMonitor.namespace Namespace in which Prometheus is running
    ##
    namespace: monitoring
    ## @param metrics.serviceMonitor.labels Extra labels for the ServiceMonitor
    ##
    labels: {}
    ## @param metrics.serviceMonitor.jobLabel The name of the label on the target service to use as the job name in Prometheus
    ##
    jobLabel: ""
    ## @param metrics.serviceMonitor.interval How frequently to scrape metrics
    ## e.g:
    ## interval: 10s
    ##
    interval: ""
    ## @param metrics.serviceMonitor.scrapeTimeout Timeout after which the scrape is ended
    ## e.g:
    ## scrapeTimeout: 10s
    ##
    scrapeTimeout: ""
    ## @param metrics.serviceMonitor.metricRelabelings [array] Specify additional relabeling of metrics
    ##
    metricRelabelings: []
    ## @param metrics.serviceMonitor.relabelings [array] Specify general relabeling
    ##
    relabelings: []
    ## @param metrics.serviceMonitor.selector Prometheus instance selector labels
    ## ref: https://github.com/bitnami/charts/tree/main/bitnami/prometheus-operator#prometheus-configuration
    ##
    selector: {}
## @section Persistence parameters

## Enable persistence using Persistent Volume Claims
## ref: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
##
persistence:
  ## @param persistence.enabled Enable NATS data persistence using PVC(s)
  ##
  enabled: true
  ## @param persistence.storageClass PVC Storage Class for NATS data volume
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner
  ##
  storageClass: "storage-queue"
  ## @param persistence.accessModes PVC Access modes
  ##
  accessModes:
    - ReadWriteOnce
  ## @param persistence.size PVC Storage Request for NATS data volume
  ##
  size: 10Gi
  ## @param persistence.annotations Annotations for the PVC
  ##
  annotations: {}
  ## @param persistence.selector Selector to match an existing Persistent Volume for NATS's data PVC
  ## If set, the PVC can't have a PV dynamically provisioned for it
  ## E.g.
  ## selector:
  ##   matchLabels:
  ##     app: my-app
  ##
  selector: {}
## @section Other parameters

## NATS Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
  ## @param pdb.create Enable/disable a Pod Disruption Budget creation
  ##
  create: true
  ## @param pdb.minAvailable Minimum number/percentage of pods that should remain scheduled
  ##
  minAvailable: ""
  ## @param pdb.maxUnavailable Maximum number/percentage of pods that may be made unavailable
  ##
  maxUnavailable: ""
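
For reference, this is roughly the nats-server.conf that the `configuration` template above renders from these values (a sketch: host and cluster names are taken from the logs below, where the deployed release is evidently ret2shell-queue with cluster name ret2shell rather than the platform-queue/nats shown here, and secrets are elided):

server_name: $NATS_SERVER_NAME
listen: 0.0.0.0:4222
http: 0.0.0.0:8222

# Authorization for client connections
# (auth.token is also set above, but the template's `else if` ordering means
# the user/password branch wins and the token is never emitted)
authorization {
  user: "platform"
  password: "[removed]"
  timeout: 1
}

# Logging options
debug: false
trace: false
logtime: false
# Pid file
pid_file: "/opt/bitnami/nats/tmp/nats-server.pid"

# Clustering definition
cluster {
  name: "ret2shell"
  listen: 0.0.0.0:6222
  # Authorization for cluster connections
  authorization {
    user: "queue-cluster"
    password: "[removed]"
    timeout: 1
  }
  routes = [
    nats://queue-cluster:[removed]@ret2shell-queue-nats:6222
  ]
}

# JetStream configuration
jetstream: enabled
jetstream {
  store_dir: /data/jetstream
  max_memory_store: 512M
  max_file_store: 10Gi
}

Note that the route is emitted even with replicaCount: 1, so the server solicits a connection to its own Service name; that is the lookup that fails in the logs below.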

What is the expected behavior?

The server runs normally.

What do you see instead?

Pods fail with status CrashLoopBackOff; the pod log is below:

nats 14:47:59.94 INFO  ==>
nats 14:47:59.94 INFO  ==> Welcome to the Bitnami nats container
nats 14:47:59.94 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
nats 14:48:00.03 INFO  ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
nats 14:48:00.03 INFO  ==> Upgrade to Tanzu Application Catalog for production environments to access custom-configured and pre-packaged software components. Gain enhanced features, including Software Bill of Materials (SBOM), CVE scan result reports, and VEX documents. To learn more, visit https://bitnami.com/enterprise
nats 14:48:00.04 INFO  ==>
nats 14:48:00.04 INFO  ==> ** Starting NATS setup **
nats 14:48:00.24 WARN  ==> A custom configuration file "nats-server.conf" was found or the file is not writable. Configurations based on environment variables will not be applied for this file.
nats 14:48:00.33 INFO  ==> Initializing NATS
nats 14:48:00.34 INFO  ==> Custom configuration files detected, using them
nats 14:48:00.34 INFO  ==> ** NATS setup finished! **

nats 14:48:00.44 INFO  ==> ** Starting NATS **
[1] [INF] Starting nats-server
[1] [INF]   Version:  2.10.18
[1] [INF]   Git:      [57d23ac]
[1] [INF]   Cluster:  ret2shell
[1] [INF]   Name:     ret2shell-queue-nats-0
[1] [INF]   Node:     M2b4ksaZ
[1] [INF]   ID:       NAOXB5DJWXFAF6BOCD4NWJJZ55T22QNMPIQP7X4NYEYL4CU4OKOQYQS5
[1] [WRN] Plaintext passwords detected, use nkeys or bcrypt
[1] [INF] Using configuration file: /opt/bitnami/nats/conf/nats-server.conf
[1] [INF] Starting http monitor on 0.0.0.0:8222
[1] [INF] Starting JetStream
[1] [INF]     _ ___ _____ ___ _____ ___ ___   _   __  __
[1] [INF]  _ | | __|_   _/ __|_   _| _ \ __| /_\ |  \/  |
[1] [INF] | || | _|  | | \__ \ | | |   / _| / _ \| |\/| |
[1] [INF]  \__/|___| |_| |___/ |_| |_|_\___/_/ \_\_|  |_|
[1] [INF]
[1] [INF]          https://docs.nats.io/jetstream
[1] [INF]
[1] [INF] ---------------- JETSTREAM ----------------
[1] [INF]   Max Memory:      488.28 MB
[1] [INF]   Max Storage:     10.00 GB
[1] [INF]   Store Directory: "/data/jetstream/jetstream"
[1] [INF] -------------------------------------------
[1] [INF] Starting JetStream cluster
[1] [INF] Creating JetStream metadata controller
[1] [INF] JetStream cluster recovering state
[1] [INF] Listening for client connections on 0.0.0.0:4222
[1] [INF] Server is ready
[1] [INF] Cluster name is ret2shell
[1] [INF] Listening for route connections on 0.0.0.0:6222
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Healthcheck failed: "JetStream has not established contact with a meta leader"
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [ERR] Error trying to connect to route (attempt 1): lookup for host "ret2shell-queue-nats": lookup ret2shell-queue-nats on 10.43.0.10:53: read udp 10.42.0.57:48502->10.43.0.10:53: i/o timeout
[1] [ERR] Error trying to connect to route (attempt 1): lookup for host "ret2shell-queue-nats": lookup ret2shell-queue-nats on 10.43.0.10:53: read udp 10.42.0.57:46269->10.43.0.10:53: i/o timeout
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Healthcheck failed: "JetStream has not established contact with a meta leader"
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Healthcheck failed: "JetStream has not established contact with a meta leader"
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Healthcheck failed: "JetStream has not established contact with a meta leader"
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Healthcheck failed: "JetStream has not established contact with a meta leader"
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Waiting for routing to be established...
[1] [WRN] Healthcheck failed: "JetStream has not established contact with a meta leader"
[1] [INF] Initiating Shutdown...
[1] [INF] Initiating JetStream Shutdown...
[1] [INF] JetStream Shutdown
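
Reading the log: the solicited route nats://...@ret2shell-queue-nats:6222 can never be resolved (the lookups against the cluster DNS at 10.43.0.10:53 time out), routing is never established, and with JetStream enabled the /healthz monitoring endpoint that the probes hit keeps answering "JetStream has not established contact with a meta leader" until the liveness failureThreshold is exceeded and the kubelet restarts the pod. The DNS half can be checked in isolation with something like this (a sketch; adjust image and namespace as needed):

kubectl run dns-test -it --rm --restart=Never --image=busybox:1.36 -- \
  nslookup ret2shell-queue-nats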

Additional information

No response
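
Two things are worth separating here. The DNS i/o timeouts point at the cluster's DNS/CNI setup rather than at the chart itself, which matches the triage below. Independently, for single-replica JetStream installs the stock probes can be relaxed through the chart's customLivenessProbe value documented above; a hedged sketch, assuming NATS's js-enabled-only healthz query parameter (which skips the meta-leader check) and the chart's monitoring port of 8222:

customLivenessProbe:
  httpGet:
    # js-enabled-only=true fails only when JetStream is disabled, instead of
    # failing while the meta leader is still being elected
    path: /healthz?js-enabled-only=true
    port: 8222
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6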

carrodher commented 1 month ago

The issue may not be directly related to the Bitnami container image/Helm chart, but rather to how the application is being utilized, configured in your specific environment, or tied to a specific scenario that is not easy to reproduce on our side.

If you think that's not the case and are interested in contributing a solution, we welcome you to create a pull request. The Bitnami team is excited to review your submission and offer feedback. You can find the contributing guidelines here.

Your contribution will greatly benefit the community. Feel free to reach out if you have any questions or need assistance.

If you have any questions about the application, customizing its content, or technology and infrastructure usage, we highly recommend that you refer to the forums and user guides provided by the project responsible for the application or technology.

With that said, we'll keep this ticket open until the stale bot automatically closes it, in case someone from the community contributes valuable insights.

github-actions[bot] commented 3 weeks ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 3 weeks ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.