netbox-community / netbox-chart

A Helm chart for NetBox
https://netbox.readthedocs.io/
Apache License 2.0

<class 'redis.sentinel.SlaveNotFoundError'> #137

Closed: ghost closed 1 year ago

ghost commented 1 year ago

Hi!

I am opening a new issue for this since none of the older issues here solved my problem. I deployed NetBox using bootc/netbox-chart with external bitnami/redis and bitnami/postgresql charts, all running in the same namespace. The deployment works, but every 4-5 minutes an error pops up in one of the NetBox pods: redis.sentinel.SlaveNotFoundError: No slave found for 'redis'. We get the error page shown in the attached screenshot when it happens.

On kubectl events, we get the following:

 LAST SEEN   TYPE      REASON      OBJECT                        COUNT
 4m12s       Warning   Unhealthy   pod/netbox-8585d6d74f-fhtmj   33
 28s         Warning   Unhealthy   pod/netbox-8585d6d74f-58gsp   2

Any ideas on what to check, or how to fix this?
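
For reference, this is roughly what I have been running from a throwaway pod in the same namespace to see what the sentinels actually report. It is only a sketch: the address redis:26379, the service name 'redis', and the lack of a password all come from my values files below, so adjust as needed.

# Quick sentinel check using redis-py's Sentinel helpers (the same library the
# traceback below goes through). Assumes no sentinel authentication, matching
# auth.enabled / auth.sentinel = false in my bitnami/redis values.
from redis.sentinel import Sentinel

sentinel = Sentinel([("redis", 26379)], socket_timeout=1.0)

# Master address tracked under the service name "redis"
# (raises MasterNotFoundError if the sentinels do not know that name)
print("master:", sentinel.discover_master("redis"))

# Replicas the sentinels would hand out to read clients; an empty list here is
# the same condition that ends in SlaveNotFoundError in the logs below
print("slaves:", sentinel.discover_slaves("redis"))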

Additional info for the NetBox pods. Logs:

 netbox Internal Server Error: /login/
 netbox Traceback (most recent call last):
 netbox   File "/opt/netbox/venv/lib/python3.10/site-packages/django_redis/cache.py", line 31, in _decorator
 netbox     return method(self, *args, **kwargs)
 netbox   File "/opt/netbox/venv/lib/python3.10/site-packages/django_redis/cache.py", line 98, in _get
 netbox     return self.client.get(key, default=default, version=version, client=client)  
 netbox   File "/opt/netbox/venv/lib/python3.10/site-packages/django_redis/client/default.py", line 260, in get    
 netbox     raise ConnectionInterrupted(connection=client) from e 
 netbox django_redis.exceptions.ConnectionInterrupted: Redis SlaveNotFoundError: No slave found for 'redis' 
 netbox 
 netbox During handling of the above exception, another exception occurred:   
 netbox 
 netbox Traceback (most recent call last):
 netbox   File "/opt/netbox/venv/lib/python3.10/site-packages/django/core/handlers/exception.py", line 55, in inner
 netbox     response = get_response(request)    
 netbox   File "/opt/netbox/netbox/netbox/middleware.py", line 133, in __call__     
 netbox     with change_logging(request): 
 netbox   File "/usr/lib/python3.10/contextlib.py", line 142, in __exit__     
 netbox     next(self.gen)
 netbox   File "/opt/netbox/netbox/extras/context_managers.py", line 21, in change_logging
 netbox     flush_webhooks(webhooks_queue.get())
 netbox   File "/opt/netbox/netbox/extras/webhooks.py", line 83, in flush_webhooks  
 netbox     rq_queue_name = get_config().QUEUE_MAPPINGS.get('webhook', RQ_QUEUE_DEFAULT)  
 netbox   File "/opt/netbox/netbox/netbox/config/__init__.py", line 27, in get_config     
 netbox     _thread_locals.config = Config()    
 netbox   File "/opt/netbox/netbox/netbox/config/__init__.py", line 47, in __init__ 
 netbox     self._populate_from_cache()   
 netbox   File "/opt/netbox/netbox/netbox/config/__init__.py", line 70, in _populate_from_cache 
 netbox     self.config = cache.get('config') or {}   
 netbox   File "/opt/netbox/venv/lib/python3.10/site-packages/django_redis/cache.py", line 91, in get 
 netbox     value = self._get(key, default, version, client)
 netbox   File "/opt/netbox/venv/lib/python3.10/site-packages/django_redis/cache.py", line 38, in _decorator
 netbox     raise e.__cause__   
 netbox   File "/opt/netbox/venv/lib/python3.10/site-packages/django_redis/client/default.py", line 258, in get    
 netbox     value = client.get(key)   
 netbox   File "/opt/netbox/venv/lib/python3.10/site-packages/redis/commands/core.py", line 1728, in get    
 netbox     return self.execute_command("GET", name)  
 netbox   File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 1255, in execute_command
 netbox     conn = self.connection or pool.get_connection(command_name, **options)  
 netbox   File "/opt/netbox/venv/lib/python3.10/site-packages/redis/connection.py", line 1389, in get_connection   
 netbox     connection.connect()
 netbox   File "/opt/netbox/venv/lib/python3.10/site-packages/redis/sentinel.py", line 54, in connect 
 netbox     return self.retry.call_with_retry(self._connect_retry, lambda error: None)    
 netbox   File "/opt/netbox/venv/lib/python3.10/site-packages/redis/retry.py", line 51, in call_with_retry  
 netbox     raise error   
 netbox   File "/opt/netbox/venv/lib/python3.10/site-packages/redis/retry.py", line 46, in call_with_retry  
 netbox     return do()   
 netbox   File "/opt/netbox/venv/lib/python3.10/site-packages/redis/sentinel.py", line 46, in _connect_retry
 netbox     for slave in self.connection_pool.rotate_slaves():    
 netbox   File "/opt/netbox/venv/lib/python3.10/site-packages/redis/sentinel.py", line 138, in rotate_slaves
 netbox     raise SlaveNotFoundError(f"No slave found for {self.service_name!r}")   
 netbox redis.sentinel.SlaveNotFoundError: No slave found for 'redis'   
 netbox 127.0.0.6 - - [02/Feb/2023:01:01:02 +0000] "GET /login/ HTTP/1.1" 500 1605 "-" "kube-probe/1.22+"
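
For context on those last frames: as far as I understand it, the pool behind django_redis only raises after both lookups come up empty. This is a heavily simplified paraphrase of redis-py's SentinelConnectionPool.rotate_slaves() (the redis/sentinel.py frames above), not the actual library source, which also round-robins the replicas and retries the connection attempts:

from redis.sentinel import MasterNotFoundError, SlaveNotFoundError

def rotate_slaves(sentinel, service_name):
    # Ask the sentinels for the replicas of this service and hand them out
    for slave in sentinel.discover_slaves(service_name):
        yield slave
    # No usable replica: fall back to the master address as a last resort
    try:
        yield sentinel.discover_master(service_name)
    except MasterNotFoundError:
        pass
    # Reached only when there were no replicas and the fallback did not work either
    raise SlaveNotFoundError(f"No slave found for {service_name!r}")

So when this fires in the NetBox pod, the sentinels at redis:26379 are reachable, but either they report no healthy replica for the service name 'redis' or none of the reported addresses accept a connection, and the master fallback does not help either.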

PS: Sentinel and Redis authentication are already disabled. Here are the settings in my values.yaml for bootc/netbox-chart:

# Default values for netbox.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: netboxcommunity/netbox
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

# You can also use an existing secret for the superuser password and API token
# See `existingSecret` for details
superuser:
  name: admin
  email: admin@example.com
  password: admin
  apiToken: 0123456789abcdef0123456789abcdef01234567

# Skip the netbox-docker startup scripts which can pre-populate objects into a
# fresh NetBox installation. By default these do nothing, but they take a while
# to run, so we skip them. See:
# https://github.com/netbox-community/netbox-docker/tree/master/startup_scripts
skipStartupScripts: true

# This is a list of valid fully-qualified domain names (FQDNs) for the NetBox
# server. NetBox will not permit write access to the server via any other
# hostnames. The first FQDN in the list will be treated as the preferred name.
allowedHosts:
  - '*'

# Specify one or more name and email address tuples representing NetBox
# administrators. These people will be notified of application errors (assuming
# correct email settings are provided).
admins: []
  # - ['John Doe', 'jdoe@example.com']

# Permit the retrieval of API tokens after their creation.
allowTokenRetrieval: false

# This parameter acts as a pass-through for configuring Django's built-in
# password validators for local user accounts. If configured, these will be
# applied whenever a user's password is updated to ensure that it meets minimum
# criteria such as length or complexity.
# https://docs.netbox.dev/en/stable/configuration/optional-settings/#auth_password_validators
authPasswordValidators: []

# URL schemes that are allowed within links in NetBox
allowedUrlSchemes: [file, ftp, ftps, http, https, irc, mailto, sftp, ssh, tel,
                    telnet, tftp, vnc, xmpp]

banner:
  # Optionally display a persistent banner at the top and/or bottom of every
  # page. HTML is allowed.
  top: ''
  bottom: ''

  # Text to include on the login page above the login form. HTML is allowed.
  login: ''

# Base URL path if accessing NetBox within a directory. For example, if
# installed at http://example.com/netbox/, set to 'netbox/'. If using
# Kubernetes Ingress, make sure you set ingress.hosts[].paths[] appropriately.
basePath: ''

# Maximum number of days to retain logged changes. Set to 0 to retain change
# logs indefinitely. (Default: 90)
changelogRetention: 90

# This is a mapping of models to custom validators that have been defined
# locally to enforce custom validation logic.
# https://docs.netbox.dev/en/stable/configuration/dynamic-settings/#custom_validators
customValidators: {}

# This is a dictionary defining the default preferences to be set for newly-
# created user accounts.
# https://docs.netbox.dev/en/stable/configuration/dynamic-settings/#default_user_preferences
defaultUserPreferences: {}
  # pagination:
  #   per_page: 100

# API Cross-Origin Resource Sharing (CORS) settings. If originAllowAll
# is set to true, all origins will be allowed. Otherwise, define a list of
# allowed origins using either originWhitelist or originRegexWhitelist. For
# more information, see https://github.com/ottoyiu/django-cors-headers
cors:
  originAllowAll: false
  originWhitelist: []
  originRegexWhitelist: []
  #  - '^(https?://)?(\w+\.)?example\.com$'

# CSRF settings.  Needed for netbox v3.2.0 and newer. For more information
# see https://docs.netbox.dev/en/stable/configuration/optional-settings/#csrf_trusted_origins
csrf:
  # The name of the cookie to use for the cross-site request forgery (CSRF)
  # authentication token.
  cookieName: csrftoken
  # Defines a list of trusted origins for unsafe (e.g. POST) requests. This is
  # a pass-through to Django's CSRF_TRUSTED_ORIGINS setting. Note that each
  # host listed must specify a scheme (e.g. http:// or https://).
  trustedOrigins: []

# Note: this is where the CUSTOM_VALIDATORS setting naturally fits in relation
# to the upstream NetBox configuration, but the setting cannot be reflected in
# YAML/JSON as it depends on creating instances of Python classes.

# Set the default preferred language/locale
defaultLanguage: en-us

# Set to True to enable server debugging. WARNING: Debugging introduces a
# substantial performance penalty and may reveal sensitive information about
# your installation. Only enable debugging while performing testing. Never
# enable debugging on a production system.
debug: false

# Display full traceback of errors that occur when applying database
# migrations.
dbWaitDebug: false

# Email settings
email:
  server: localhost
  port: 25
  username: ''
  password: ''
  useSSL: false
  useTLS: false
  sslCertFile: ''
  sslKeyFile: ''
  timeout: 10  # seconds
  from: ''

# Enforcement of unique IP space can be toggled on a per-VRF basis. To enforce
# unique IP space within the global table (all prefixes and IP addresses not
# assigned to a VRF), set enforceGlobalUnique to True.
enforceGlobalUnique: false

# Exempt certain models from the enforcement of view permissions. Models listed
# here will be viewable by all users and by anonymous users. List models in the
# form `<app>.<model>`. Add '*' to this list to exempt all models.
exemptViewPermissions: []
#  - dcim.site
#  - dcim.region
#  - ipam.prefix

# Some static choice fields on models can be configured with custom values.
# Each choice in the list must have a database value and a human-friendly
# label, and may optionally specify a color.
# https://docs.netbox.dev/en/stable/configuration/optional-settings/#field_choices
fieldChoices: {}
  # 'dcim.Site.status':
  #   - [foo, Foo, red]
  #   - [bar, Bar, green]
  #   - [baz, Baz, blue]
  # 'dcim.Site.status+':
  #   ...

# Enable the GraphQL API
graphQlEnabled: true

# HTTP proxies NetBox should use when sending outbound HTTP requests (e.g. for
# webhooks).
httpProxies: null
  # http: http://10.10.1.10:3128
  # https: http://10.10.1.10:1080

# IP addresses recognized as internal to the system. The debugging toolbar will
# be available only to clients accessing NetBox from an internal IP.
internalIPs: ['127.0.0.1', '::1']

# The number of days to retain job results (scripts and reports). Set this to 0
# to retain job results in the database indefinitely.
# https://docs.netbox.dev/en/stable/configuration/dynamic-settings/#jobresult_retention
jobResultRetention: 90

# Enable custom logging. Please see the Django documentation for detailed
# guidance on configuring custom logs:
# https://docs.djangoproject.com/en/1.11/topics/logging/
logging: {}

# Automatically reset the lifetime of a valid session upon each authenticated
# request. Enables users to remain authenticated to NetBox indefinitely.
loginPersistence: false

# Setting this to True will permit only authenticated users to access any part
# of NetBox. By default, anonymous users are permitted to access most data in
# NetBox but not make any changes.
loginRequired: false

# The length of time (in seconds) for which a user will remain logged into the
# web UI before being prompted to re-authenticate.
loginTimeout: 1209600  # 14 days

# The view name or URL to which users are redirected after logging out.
logoutRedirectUrl: home

# Setting this to True will display a "maintenance mode" banner at the top of
# every page.
maintenanceMode: false

# The URL to use when mapping physical addresses or GPS coordinates
mapsUrl: 'https://maps.google.com/?q='

# An API consumer can request an arbitrary number of objects by appending the
# "limit" parameter to the URL (e.g. "?limit=1000"). This setting defines the
# maximum limit. Setting it to 0 or None will allow an API consumer to request
# all objects by specifying "?limit=0".
maxPageSize: 1000

# By default uploaded media is stored in an attached volume. Using
# Django-storages is also supported. Provide the class path of the storage
# driver in storageBackend and any configuration options in storageConfig.
storageBackend: null  # storages.backends.s3boto3.S3Boto3Storage
storageConfig: {}
  # AWS_ACCESS_KEY_ID: 'Key ID'
  # AWS_SECRET_ACCESS_KEY: 'Secret'
  # AWS_STORAGE_BUCKET_NAME: 'netbox'
  # AWS_S3_ENDPOINT_URL: 'endpoint URL of S3 provider'
  # AWS_S3_REGION_NAME: 'eu-west-1'

# Expose Prometheus monitoring metrics at the HTTP endpoint '/metrics'
metricsEnabled: false

napalm:
  # Credentials that NetBox will use to access live devices.
  username: ''
  password: ''

  # NAPALM timeout (in seconds). (Default: 30)
  timeout: 30

  # NAPALM optional arguments (see
  # http://napalm.readthedocs.io/en/latest/support/#optional-arguments).
  # Arguments must be provided as a dictionary.
  args: {}

# Determine how many objects to display per page within a list. (Default: 50)
paginateCount: 50

# Enable installed plugins. Add the name of each plugin to the list.
plugins: []

# Plugins configuration settings. These settings are used by various plugins
# that the user may have installed. Each key in the dictionary is the name of
# an installed plugin and its value is a dictionary of settings.
pluginsConfig: {}

# The default value for the amperage field when creating new power feeds.
# https://docs.netbox.dev/en/stable/configuration/dynamic-settings/#powerfeed_default_amperage
powerFeedDefaultAmperage: 15

# The default value (percentage) for the max_utilization field when creating
# new power feeds.
# https://docs.netbox.dev/en/stable/configuration/dynamic-settings/#powerfeed_default_max_utilization
powerFeedMaxUtilisation: 80

# The default value for the voltage field when creating new power feeds.
# https://docs.netbox.dev/en/stable/configuration/dynamic-settings/#powerfeed_default_voltage
powerFeedDefaultVoltage: 120

# When determining the primary IP address for a device, IPv6 is preferred over
# IPv4 by default. Set this to True to prefer IPv4 instead.
preferIPv4: false

# Rack elevation size defaults, in pixels. For best results, the ratio of width
# to height should be roughly 10:1.
rackElevationDefaultUnitHeight: 22
rackElevationDefaultUnitWidth: 220

# Remote authentication support
remoteAuth:
  enabled: true
  backend: 'social_core.backends.azuread.AzureADOAuth2' #netbox.authentication.RemoteUserBackend
  header: HTTP_REMOTE_USER
  autoCreateUser: true
  defaultGroups: []
  defaultPermissions: {}
  groupSyncEnabled: false
  groupHeader: HTTP_REMOTE_USER_GROUP
  superuserGroups: []
  superusers: []
  staffGroups: []
  staffUsers: []
  groupSeparator: '|'

  # The following options are specific for backend "netbox.authentication.LDAPBackend"
  # you can use an existing netbox secret with "ldap_bind_password" instead of "bindPassword"
  # see https://django-auth-ldap.readthedocs.io
  #
  # When enabling LDAP support please see "Using LDAP Authentication" in README.md and
  # uncomment ALL of the configuration settings below, or your configuration will be invalid.
  #
  # ldap:
  #   serverUri: 'ldap://domain.com'
  #   startTls: true
  #   ignoreCertErrors: false
  #   bindDn: 'CN=Netbox,OU=EmbeddedDevices,OU=MyCompany,DC=domain,dc=com'
  #   bindPassword: 'TopSecretPassword'
  #   userDnTemplate: null
  #   userSearchBaseDn: 'OU=Users,OU=MyCompany,DC=domain,dc=com'
  #   userSearchAttr: 'sAMAccountName'
  #   groupSearchBaseDn: 'OU=Groups,OU=MyCompany,DC=domain,dc=com'
  #   groupSearchClass: 'group'
  #   groupType: 'GroupOfNamesType'
  #   requireGroupDn: ''
  #   findGroupPerms: true
  #   mirrorGroups: true
  #   mirrorGroupsExcept: null
  #   cacheTimeout: 3600
  #   isAdminDn: 'CN=Network Configuration Operators,CN=Builtin,DC=domain,dc=com'
  #   isSuperUserDn: 'CN=Domain Admins,CN=Users,DC=domain,dc=com'
  #   attrFirstName: 'givenName'
  #   attrLastName: 'sn'
  #   attrMail: 'mail'

releaseCheck:
  # This repository is used to check whether there is a new release of NetBox
  # available. Set to null to disable the version check or use the URL below to
  # check for release in the official NetBox repository.
  url: null
  # url: https://api.github.com/repos/netbox-community/netbox/releases

# Maximum execution time for background tasks, in seconds.
rqDefaultTimeout: 300  # 5 mins

# The name to use for the session cookie.
sessionCookieName: sessionid

# Localization
enableLocalization: false

# Time zone (default: UTC)
timeZone: UTC

# Date/time formatting. See the following link for supported formats:
# https://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
dateFormat: 'N j, Y'
shortDateFormat: 'Y-m-d'
timeFormat: 'g:i a'
shortTimeFormat: 'H:i:s'
dateTimeFormat: 'N j, Y g:i a'
shortDateTimeFormat: 'Y-m-d H:i'

## Extra configuration settings
# You can pass additional YAML files to be loaded into NetBox's configuration.
# These can be passed as arbitrary configuration values set in the chart, or
# you can load arbitrary *.yaml keys from ConfigMaps and Secrets.
extraConfig: []
  # - values:
  #     SOCIAL_AUTH_AZUREAD_OAUTH2_KEY: 'OAUTH2_KEY'
  #     SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET: 'OAUTH2_SECRET'
  #     EXTRA_SETTING_ONE: example
  #     ANOTHER_SETTING: foobar
  # - configMap: # pod.spec.volumes.configMap
  #     name: netbox-extra
  #     items: []
  #     optional: false
  # - secret: # same as pod.spec.volumes.secret
  #     secretName: netbox-extra
  #     items: []
  #     optional: false

# If provided, this should be a 50+ character string of random characters. It
# will be randomly generated if left blank.
# You can also use an existing secret with "secret_key" instead of "secretKey"
# See `existingSecret` for details
secretKey: ""

## Provide passwords using existing secret
# If set, this Secret must contain the following keys:
# - db_password: database password (if postgresql.enabled is false and
#     externalDatabase.existingSecretName is blank)
# - email_password: SMTP user password
# - ldap_bind_password: Password for LDAP bind DN
# - napalm_password: NAPALM user password
# - redis_tasks_password: Redis password for tasks Redis instance (if
#     redis.enabled is false and tasksRedis.existingSecretName is blank)
# - redis_cache_password: Redis password for caching Redis instance (if
#     redis.enabled is false and cachingRedis.existingSecretName is blank)
# - secret_key: session encryption token (50+ random characters)
# - superuser_password: Password for the initial super-user account
# - superuser_api_token: API token created for the initial super-user account
existingSecret: ""

postgresql:
  ## Deploy PostgreSQL using bundled chart
  # To use an external database, set this to false and configure the settings
  # under externalDatabase
  enabled: false

  auth:
    username: netbox
    database: netbox

## External database settings
# These are used if postgresql.enabled is false, and are ignored otherwise
externalDatabase:
  host: postgresql
  port: 5432
  database: netbox
  username: netbox
  password: ""
  existingSecretName: ""
  existingSecretKey: postgresql-password

  # The following settings also apply when using the bundled PostgreSQL chart:
  sslMode: prefer
  connMaxAge: 300
  disableServerSideCursors: false
  targetSessionAttrs: read-write

redis:
  ## Deploy Redis using bundled chart
  # To use an external Redis instance, set this to false and configure the
  # settings under *both* tasksRedis *and* cachingRedis
  enabled: false

tasksRedis:
  database: 0
  ssl: false
  insecureSkipTlsVerify: false
  caCertPath: ""

  # Used only when redis.enabled is false. host and port are not used if
  # sentinels are given.
  host: redis
  port: 6379
  sentinels: #[]
    - redis:26379
  #  - mysentinel:26379
  sentinelService: redis
  sentinelTimeout: 300
  username: ""
  password: ""
  existingSecretName: ""
  existingSecretKey: redis-password

cachingRedis:
  database: 1
  ssl: false
  insecureSkipTlsVerify: false
  caCertPath: ""

  # Used only when redis.enabled is false. host and port are not used if
  # sentinels are given.
  host: redis
  port: 6379
  sentinels: #[]
    - redis:26379
  #  - mysentinel:26379
  sentinelService: redis
  sentinelTimeout: 300
  username: ""
  password: ""
  existingSecretName: ""
  existingSecretKey: redis-password

imagePullSecrets: #[]
  - name: regcred
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

## Storage configuration for media
persistence:
  enabled: false
  ##
  ## Existing claim to use
  existingClaim: ""
  ## Existing claim's subPath to use, e.g. "media" (optional)
  subPath: ""
  ##
  ## Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  storageClass: ""
  ## Persistent Volume Selector
  ## if enabled, define any Selectors for choosing existing Persistent Volumes here
  # selector:
  #   matchLabel:
  #     netbox-storage: media
  accessMode: ReadWriteOnce
  ##
  ## Persistent storage size request
  size: 1Gi

## Storage configuration for reports
reportsPersistence:
  enabled: false
  ##
  ## Existing claim to use
  existingClaim: ""
  ## Existing claim's subPath to use, e.g. "media" (optional)
  subPath: ""
  ##
  ## Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  storageClass: ""
  ## Persistent Volume Selector
  ## if enabled, define any Selectors for choosing existing Persistent Volumes here
  # selector:
  #   matchLabel:
  #     netbox-storage: reports
  accessMode: ReadWriteOnce
  ##
  ## Persistent storage size request
  size: 1Gi

commonLabels: {}

commonAnnotations: {}

podAnnotations: #{}
  sidecar.istio.io/proxyCPULimit: "900m"
  sidecar.istio.io/proxyMemoryLimit: "1024Mi"
  # sidecar.istio.io/proxyCPU: "500m"
  # sidecar.istio.io/proxyMemory: "512Mi"

podLabels: {}

podSecurityContext:
  fsGroup: 1000
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault
  # runAsUser: 1000
  # runAsGroup: 1000

securityContext:
  capabilities:
    drop:
      - ALL
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000
  runAsGroup: 1000
  seccompProfile:
    type: RuntimeDefault

service:
  annotations: {}
    # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <acm_cert_arn>
    # service.beta.kubernetes.io/aws-load-balancer-ssl-ports: http
  type: ClusterIP
  port: 80
  nodePort: ""
  clusterIP: ""
  clusterIPs: []
  externalIPs: []
  externalTrafficPolicy: ""
  ipFamilyPolicy: ""
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  # - 10.0.0.0/8

ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        # You can manually specify the service name and service port if
        # required. This could be useful if, for example, you are using the AWS
        # ALB Ingress Controller and want to set up automatic SSL redirect.
        # https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/tasks/ssl_redirect/#redirect-traffic-from-http-to-https
        # - path: /*
        #   backend:
        #     serviceName: ssl-redirect
        #     servicePort: use-annotation
        #
        # Or you can let the template set it for you.
        # Both types of rule can be combined.
        # NB: You may also want to set the basePath above
        - /

  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: #{}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  limits:
    cpu: "600m"
    memory: "1024Mi"

topologySpreadConstraints: []
  #  - maxSkew: 1
  #    topologyKey: topology.kubernetes.io/zone
  #    whenUnsatisfiable: DoNotSchedule
  #    labelSelector:
  #      matchLabels:
  #        "app.kubernetes.io/component": netbox
  #        "app.kubernetes.io/name": netbox

readinessProbe:
  enabled: true
  initialDelaySeconds: 0
  timeoutSeconds: 1
  periodSeconds: 10
  successThreshold: 1

init:
  image:
    repository: busybox
    tag: 1.32.1
    pullPolicy: IfNotPresent

  resources: #{}
    limits:
      cpu: "600m"
      memory: "1024Mi"
    requests: #{}
      cpu: "300m"
      memory: "512Mi"

  securityContext:
    capabilities:
      drop:
        - ALL
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000  # Keep the same as securityContext.runAsGroup

test:
  image:
    repository: busybox
    tag: 1.32.1
    pullPolicy: IfNotPresent

  resources: #{}
    limits:
      cpu: "600m"
      memory: "1024Mi"
    requests: #{}
      cpu: "300m"
      memory: "512Mi"

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

hostAliases: []

updateStrategy: {}
  # type: RollingUpdate

affinity: #{}
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/instance: netbox
            app.kubernetes.io/name: netbox
            app.kubernetes.io/component: netbox
        topologyKey: kubernetes.io/hostname

## Additional environment variables to set
extraEnvs: []
#  - name: FOO
#    valueFrom:
#      secretKeyRef:
#        key: FOO
#        name: secret-resource

## Additional volumes to mount
extraVolumeMounts: []
#  - name: extra-volume
#    mountPath: /run/secrets/super-secret
#    readOnly: true

extraVolumes: []
#  - name: extra-volume
#    secret:
#      secretName: super-secret

## Additional containers to be added to the NetBox pod.
extraContainers: []
#  - name: my-sidecar
#    image: nginx:latest

## Containers which are run before the NetBox containers are started.
extraInitContainers: []
#  - name: init-myservice
#    image: busybox
#    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']

serviceMonitor:
  enabled: false
  additionalLabels: {}
  interval: 1m
  scrapeTimeout: 10s

# Configuration of Cron settings
housekeeping:
  enabled: true
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 5
  restartPolicy: OnFailure
  schedule: '0 0 * * *'
  successfulJobsHistoryLimit: 5
  suspend: false

  podAnnotations: #{}
    sidecar.istio.io/proxyCPULimit: "500m"
    sidecar.istio.io/proxyMemoryLimit: "1024Mi"
    sidecar.istio.io/proxyCPU: "500m"
    sidecar.istio.io/proxyMemory: "512Mi"

  podLabels: {}

  podSecurityContext:
    fsGroup: 1000
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
    # runAsUser: 1000
    # runAsGroup: 1000

  securityContext:
    capabilities:
      drop:
        - ALL
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    seccompProfile:
      type: RuntimeDefault

  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  resources: #{}
    limits:
      cpu: "600m"
      memory: "1024Mi"
    requests: #{}
      cpu: "300m"
      memory: "512Mi"

  nodeSelector: {}

  tolerations: []

  affinity: {}

  ## Additional environment variables to set
  extraEnvs: []
  #  - name: FOO
  #    valueFrom:
  #      secretKeyRef:
  #        key: FOO
  #        name: secret-resource

  ## Additional volumes to mount
  extraVolumeMounts: []
  #  - name: extra-volume
  #    mountPath: /run/secrets/super-secret
  #    readOnly: true

  extraVolumes: []
  #  - name: extra-volume
  #    secret:
  #      secretName: super-secret

  ## Additional containers to be added to the NetBox pod.
  extraContainers: []
  #  - name: my-sidecar
  #    image: nginx:latest

  ## Containers which are run before the NetBox containers are started.
  extraInitContainers: []
  #  - name: init-myservice
  #    image: busybox
  #    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']

# Worker for Netbox
# Only required for Netbox Jobs, e.g. Webhooks
worker:
  enabled: true

  replicaCount: 1

  podAnnotations: #{}
    sidecar.istio.io/proxyCPULimit: "900m"
    sidecar.istio.io/proxyMemoryLimit: "1024Mi"
    # sidecar.istio.io/proxyCPU: "500m"
    # sidecar.istio.io/proxyMemory: "512Mi"

  podLabels: {}

  podSecurityContext:
    fsGroup: 1000
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
    # runAsUser: 1000
    # runAsGroup: 1000

  securityContext:
    capabilities:
      drop:
        - ALL
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    seccompProfile:
      type: RuntimeDefault

  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  resources: #{}
    limits:
      cpu: "600m"
      memory: "1024Mi"
    requests: {}
      # cpu: "300m"
      # memory: "512Mi"

  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80

  nodeSelector: {}

  tolerations: []

  hostAliases: []

  updateStrategy: {}
    # type: RollingUpdate

  affinity: {}

  ## Additional environment variables to set
  extraEnvs: []
  #  - name: FOO
  #    valueFrom:
  #      secretKeyRef:
  #        key: FOO
  #        name: secret-resource

  ## Additional volumes to mount
  extraVolumeMounts: []
  #  - name: extra-volume
  #    mountPath: /run/secrets/super-secret
  #    readOnly: true

  extraVolumes: []
  #  - name: extra-volume
  #    secret:
  #      secretName: super-secret

  ## Additional containers to be added to the NetBox pod.
  extraContainers: []
  #  - name: my-sidecar
  #    image: nginx:latest

  ## Containers which are run before the NetBox containers are started.
  extraInitContainers: []
  #  - name: init-myservice
  #    image: busybox
  #    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']

The current values.yaml file for bitnami/redis:

## @section Global parameters
## Global Docker image parameters
## Please note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
##

## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
## @param global.storageClass Global StorageClass for Persistent Volume(s)
## @param global.redis.password Global Redis&reg; password (overrides `auth.password`)
##
global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: []
  storageClass: ""
  redis:
    password: ""

## @section Common parameters
##

## @param kubeVersion Override Kubernetes version
##
kubeVersion: ""
## @param nameOverride String to partially override common.names.fullname
##
nameOverride: ""
## @param fullnameOverride String to fully override common.names.fullname
##
fullnameOverride: ""
## @param commonLabels Labels to add to all deployed objects
##
commonLabels: {}
## @param commonAnnotations Annotations to add to all deployed objects
##
commonAnnotations: {}
## @param secretAnnotations Annotations to add to secret
##
secretAnnotations: {}
## @param clusterDomain Kubernetes cluster domain name
##
clusterDomain: cluster.local
## @param extraDeploy Array of extra objects to deploy with the release
##
extraDeploy: []
## @param useHostnames Use hostnames internally when announcing replication
##
useHostnames: true

## Enable diagnostic mode in the deployment
##
diagnosticMode:
  ## @param diagnosticMode.enabled Enable diagnostic mode (all probes will be disabled and the command will be overridden)
  ##
  enabled: false
  ## @param diagnosticMode.command Command to override all containers in the deployment
  ##
  command:
    - sleep
  ## @param diagnosticMode.args Args to override all containers in the deployment
  ##
  args:
    - infinity

## @section Redis&reg; Image parameters
##

## Bitnami Redis&reg; image
## ref: https://hub.docker.com/r/bitnami/redis/tags/
## @param image.registry Redis&reg; image registry
## @param image.repository Redis&reg; image repository
## @param image.tag Redis&reg; image tag (immutable tags are recommended)
## @param image.digest Redis&reg; image digest in the form sha256:aa.... Please note this parameter, if set, will override the tag
## @param image.pullPolicy Redis&reg; image pull policy
## @param image.pullSecrets Redis&reg; image pull secrets
## @param image.debug Enable image debug mode
##
image:
  registry: docker.io
  repository: bitnami/redis
  tag: 7.0.8-debian-11-r12
  digest: ""
  ## Specify an imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ## e.g:
  ## pullSecrets:
  ##   - myRegistryKeySecretName
  ##
  pullSecrets: []
  ## Enable debug mode
  ##
  debug: false

## @section Redis&reg; common configuration parameters
## https://github.com/bitnami/containers/tree/main/bitnami/redis#configuration
##

## @param architecture Redis&reg; architecture. Allowed values: `standalone` or `replication`
##
architecture: replication
## Redis&reg; Authentication parameters
## ref: https://github.com/bitnami/containers/tree/main/bitnami/redis#setting-the-server-password-on-first-run
##
auth:
  ## @param auth.enabled Enable password authentication
  ##
  enabled: false
  ## @param auth.sentinel Enable password authentication on sentinels too
  ##
  sentinel: false
  ## @param auth.password Redis&reg; password
  ## Defaults to a random 10-character alphanumeric string if not set
  ##
  password: ""
  ## @param auth.existingSecret The name of an existing secret with Redis&reg; credentials
  ## NOTE: When it's set, the previous `auth.password` parameter is ignored
  ##
  existingSecret: ""
  ## @param auth.existingSecretPasswordKey Password key to be retrieved from existing secret
  ## NOTE: ignored unless `auth.existingSecret` parameter is set
  ##
  existingSecretPasswordKey: ""
  ## @param auth.usePasswordFiles Mount credentials as files instead of using an environment variable
  ##
  usePasswordFiles: false

## @param commonConfiguration [string] Common configuration to be added into the ConfigMap
## ref: https://redis.io/topics/config
##
commonConfiguration: |-
  # Enable AOF https://redis.io/topics/persistence#append-only-file
  appendonly yes
  # Disable RDB persistence, AOF persistence already enabled.
  save ""
## @param existingConfigmap The name of an existing ConfigMap with your custom configuration for Redis&reg; nodes
##
existingConfigmap: ""

## @section Redis&reg; master configuration parameters
##

master:
  ## @param master.count Number of Redis&reg; master instances to deploy (experimental, requires additional configuration)
  ##
  count: 1
  ## @param master.configuration Configuration for Redis&reg; master nodes
  ## ref: https://redis.io/topics/config
  ##
  configuration: ""
  ## @param master.disableCommands Array with Redis&reg; commands to disable on master nodes
  ## Commands will be completely disabled by renaming each to an empty string.
  ## ref: https://redis.io/topics/security#disabling-of-specific-commands
  ##
  disableCommands:
    - FLUSHDB
    - FLUSHALL
  ## @param master.command Override default container command (useful when using custom images)
  ##
  command: []
  ## @param master.args Override default container args (useful when using custom images)
  ##
  args: []
  ## @param master.preExecCmds Additional commands to run prior to starting Redis&reg; master
  ##
  preExecCmds: []
  ## @param master.extraFlags Array with additional command line flags for Redis&reg; master
  ## e.g:
  ## extraFlags:
  ##  - "--maxmemory-policy volatile-ttl"
  ##  - "--repl-backlog-size 1024mb"
  ##
  extraFlags: []
  ## @param master.extraEnvVars Array with extra environment variables to add to Redis&reg; master nodes
  ## e.g:
  ## extraEnvVars:
  ##   - name: FOO
  ##     value: "bar"
  ##
  extraEnvVars: []
  ## @param master.extraEnvVarsCM Name of existing ConfigMap containing extra env vars for Redis&reg; master nodes
  ##
  extraEnvVarsCM: ""
  ## @param master.extraEnvVarsSecret Name of existing Secret containing extra env vars for Redis&reg; master nodes
  ##
  extraEnvVarsSecret: ""
  ## @param master.containerPorts.redis Container port to open on Redis&reg; master nodes
  ##
  containerPorts:
    redis: 6379
  ## Configure extra options for Redis&reg; containers' liveness and readiness probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
  ## @param master.startupProbe.enabled Enable startupProbe on Redis&reg; master nodes
  ## @param master.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe
  ## @param master.startupProbe.periodSeconds Period seconds for startupProbe
  ## @param master.startupProbe.timeoutSeconds Timeout seconds for startupProbe
  ## @param master.startupProbe.failureThreshold Failure threshold for startupProbe
  ## @param master.startupProbe.successThreshold Success threshold for startupProbe
  ##
  startupProbe:
    enabled: false
    initialDelaySeconds: 20
    periodSeconds: 5
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 5
  ## @param master.livenessProbe.enabled Enable livenessProbe on Redis&reg; master nodes
  ## @param master.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
  ## @param master.livenessProbe.periodSeconds Period seconds for livenessProbe
  ## @param master.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
  ## @param master.livenessProbe.failureThreshold Failure threshold for livenessProbe
  ## @param master.livenessProbe.successThreshold Success threshold for livenessProbe
  ##
  livenessProbe:
    enabled: true
    initialDelaySeconds: 20
    periodSeconds: 5
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 5
  ## @param master.readinessProbe.enabled Enable readinessProbe on Redis&reg; master nodes
  ## @param master.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
  ## @param master.readinessProbe.periodSeconds Period seconds for readinessProbe
  ## @param master.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
  ## @param master.readinessProbe.failureThreshold Failure threshold for readinessProbe
  ## @param master.readinessProbe.successThreshold Success threshold for readinessProbe
  ##
  readinessProbe:
    enabled: true
    initialDelaySeconds: 20
    periodSeconds: 5
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 5
  ## @param master.customStartupProbe Custom startupProbe that overrides the default one
  ##
  customStartupProbe: {}
  ## @param master.customLivenessProbe Custom livenessProbe that overrides the default one
  ##
  customLivenessProbe: {}
  ## @param master.customReadinessProbe Custom readinessProbe that overrides the default one
  ##
  customReadinessProbe: {}
  ## Redis&reg; master resource requests and limits
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  ## @param master.resources.limits The resources limits for the Redis&reg; master containers
  ## @param master.resources.requests The requested resources for the Redis&reg; master containers
  ##
  resources:
    limits: #{}
      cpu: "500m"
      memory: "1024Mi"
    requests: {}
  ## Configure Pods Security Context
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
  ## @param master.podSecurityContext.enabled Enable Redis&reg; master pods' Security Context
  ## @param master.podSecurityContext.fsGroup Set Redis&reg; master pod's Security Context fsGroup
  ##
  podSecurityContext:
    enabled: true
    runAsGroup: 1000
    runAsUser: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  ## Configure Container Security Context
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
  ## @param master.containerSecurityContext.enabled Enable Redis&reg; master containers' Security Context
  ## @param master.containerSecurityContext.runAsUser Set Redis&reg; master containers' Security Context runAsUser
  ##
  containerSecurityContext:
    enabled: true
    runAsGroup: 1000
    runAsUser: 1000
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false
    seccompProfile:
      type: RuntimeDefault
  ## @param master.kind Use either Deployment or StatefulSet (default)
  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
  ##
  kind: StatefulSet
  ## @param master.schedulerName Alternate scheduler for Redis&reg; master pods
  ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
  ##
  schedulerName: ""
  ## @param master.updateStrategy.type Redis&reg; master statefulset strategy type
  ## @skip master.updateStrategy.rollingUpdate
  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
  ##
  updateStrategy:
    ## StrategyType
    ## Can be set to RollingUpdate, OnDelete (statefulset), Recreate (deployment)
    ##
    type: RollingUpdate
  ## @param master.minReadySeconds How many seconds a pod needs to be ready before killing the next, during update
  ##
  minReadySeconds: 0
  ## @param master.priorityClassName Redis&reg; master pods' priorityClassName
  ##
  priorityClassName: ""
  ## @param master.hostAliases Redis&reg; master pods host aliases
  ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
  ##
  hostAliases: []
  ## @param master.podLabels Extra labels for Redis&reg; master pods
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
  ##
  podLabels: {}
  ## @param master.podAnnotations Annotations for Redis&reg; master pods
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  podAnnotations: #{}
    sidecar.istio.io/proxyCPULimit: "900m"
    sidecar.istio.io/proxyMemoryLimit: "1024Mi"
    # sidecar.istio.io/proxyCPU: "450m"
    # sidecar.istio.io/proxyMemory: "512Mi"
  ## @param master.shareProcessNamespace Share a single process namespace between all of the containers in Redis&reg; master pods
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/
  ##
  shareProcessNamespace: false
  ## @param master.podAffinityPreset Pod affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard`
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
  ##
  podAffinityPreset: ""
  ## @param master.podAntiAffinityPreset Pod anti-affinity preset. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard`
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
  ##
  podAntiAffinityPreset: soft
  ## Node master.affinity preset
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
  ##
  nodeAffinityPreset:
    ## @param master.nodeAffinityPreset.type Node affinity preset type. Ignored if `master.affinity` is set. Allowed values: `soft` or `hard`
    ##
    type: ""
    ## @param master.nodeAffinityPreset.key Node label key to match. Ignored if `master.affinity` is set
    ##
    key: ""
    ## @param master.nodeAffinityPreset.values Node label values to match. Ignored if `master.affinity` is set
    ## E.g.
    ## values:
    ##   - e2e-az1
    ##   - e2e-az2
    ##
    values: []
  ## @param master.affinity Affinity for Redis&reg; master pods assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ## NOTE: `master.podAffinityPreset`, `master.podAntiAffinityPreset`, and `master.nodeAffinityPreset` will be ignored when it's set
  ##
  affinity: {}
  ## @param master.nodeSelector Node labels for Redis&reg; master pods assignment
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}
  ## @param master.tolerations Tolerations for Redis&reg; master pods assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: []
  ## @param master.topologySpreadConstraints Spread Constraints for Redis&reg; master pod assignment
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
  ## E.g.
  ## topologySpreadConstraints:
  ##   - maxSkew: 1
  ##     topologyKey: node
  ##     whenUnsatisfiable: DoNotSchedule
  ##
  topologySpreadConstraints: []
  ## @param master.dnsPolicy DNS Policy for Redis&reg; master pod
  ## ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
  ## E.g.
  ## dnsPolicy: ClusterFirst
  dnsPolicy: ""
  ## @param master.dnsConfig DNS Configuration for Redis&reg; master pod
  ## ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
  ## E.g.
  ## dnsConfig:
  ##   options:
  ##   - name: ndots
  ##     value: "4"
  ##   - name: single-request-reopen
  dnsConfig: {}
  ## @param master.lifecycleHooks for the Redis&reg; master container(s) to automate configuration before or after startup
  ##
  lifecycleHooks: {}
  ## @param master.extraVolumes Optionally specify extra list of additional volumes for the Redis&reg; master pod(s)
  ##
  extraVolumes: []
  ## @param master.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the Redis&reg; master container(s)
  ##
  extraVolumeMounts: []
  ## @param master.sidecars Add additional sidecar containers to the Redis&reg; master pod(s)
  ## e.g:
  ## sidecars:
  ##   - name: your-image-name
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##     ports:
  ##       - name: portname
  ##         containerPort: 1234
  ##
  sidecars: []
  ## @param master.initContainers Add additional init containers to the Redis&reg; master pod(s)
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
  ## e.g:
  ## initContainers:
  ##  - name: your-image-name
  ##    image: your-image
  ##    imagePullPolicy: Always
  ##    command: ['sh', '-c', 'echo "hello world"']
  ##
  initContainers: []
  ## Persistence parameters
  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    ## @param master.persistence.enabled Enable persistence on Redis&reg; master nodes using Persistent Volume Claims
    ##
    enabled: true
    ## @param master.persistence.medium Provide a medium for `emptyDir` volumes.
    ##
    medium: ""
    ## @param master.persistence.sizeLimit Set this to enable a size limit for `emptyDir` volumes.
    ##
    sizeLimit: ""
    ## @param master.persistence.path The path the volume will be mounted at on Redis&reg; master containers
    ## NOTE: Useful when using different Redis&reg; images
    ##
    path: /data
    ## @param master.persistence.subPath The subdirectory of the volume to mount on Redis&reg; master containers
    ## NOTE: Useful in dev environments
    ##
    subPath: ""
    ## @param master.persistence.subPathExpr Used to construct the subPath subdirectory of the volume to mount on Redis&reg; master containers
    ##
    subPathExpr: ""
    ## @param master.persistence.storageClass Persistent Volume storage class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is set, choosing the default provisioner
    ##
    storageClass: ""
    ## @param master.persistence.accessModes Persistent Volume access modes
    ##
    accessModes:
      - ReadWriteOnce
    ## @param master.persistence.size Persistent Volume size
    ##
    size: 8Gi
    ## @param master.persistence.annotations Additional custom annotations for the PVC
    ##
    annotations: {}
    ## @param master.persistence.selector Additional labels to match for the PVC
    ## e.g:
    ## selector:
    ##   matchLabels:
    ##     app: my-app
    ##
    selector: {}
    ## @param master.persistence.dataSource Custom PVC data source
    ##
    dataSource: {}
    ## @param master.persistence.existingClaim Use an existing PVC which must be created manually before being bound
    ## NOTE: requires master.persistence.enabled: true
    ##
    existingClaim: ""
  ## Redis&reg; master service parameters
  ##
  service:
    ## @param master.service.type Redis&reg; master service type
    ##
    type: ClusterIP
    ## @param master.service.ports.redis Redis&reg; master service port
    ##
    ports:
      redis: 6379
    ## @param master.service.nodePorts.redis Node port for Redis&reg; master
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
    ## NOTE: choose port between <30000-32767>
    ##
    nodePorts:
      redis: ""
    ## @param master.service.externalTrafficPolicy Redis&reg; master service external traffic policy
    ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
    ##
    externalTrafficPolicy: Cluster
    ## @param master.service.extraPorts Extra ports to expose (normally used with the `sidecar` value)
    ##
    extraPorts: []
    ## @param master.service.internalTrafficPolicy Redis&reg; master service internal traffic policy (requires Kubernetes v1.22 or greater to be usable)
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/
    ##
    internalTrafficPolicy: Cluster
    ## @param master.service.clusterIP Redis&reg; master service Cluster IP
    ##
    clusterIP: ""
    ## @param master.service.loadBalancerIP Redis&reg; master service Load Balancer IP
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
    ##
    loadBalancerIP: ""
    ## @param master.service.loadBalancerSourceRanges Redis&reg; master service Load Balancer sources
    ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
    ## e.g.
    ## loadBalancerSourceRanges:
    ##   - 10.10.10.0/24
    ##
    loadBalancerSourceRanges: []
    ## @param master.service.externalIPs Redis&reg; master service External IPs
    ## https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
    ## e.g.
    ## externalIPs:
    ##   - 10.10.10.1
    ##   - 201.22.30.1
    ##
    externalIPs: []
    ## @param master.service.annotations Additional custom annotations for Redis&reg; master service
    ##
    annotations: {}
    ## @param master.service.sessionAffinity Session Affinity for Kubernetes service, can be "None" or "ClientIP"
    ## If "ClientIP", consecutive client requests will be directed to the same Pod
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
    ##
    sessionAffinity: None
    ## @param master.service.sessionAffinityConfig Additional settings for the sessionAffinity
    ## sessionAffinityConfig:
    ##   clientIP:
    ##     timeoutSeconds: 300
    ##
    sessionAffinityConfig: {}
  ## @param master.terminationGracePeriodSeconds Integer setting the termination grace period for the redis-master pods
  ##
  terminationGracePeriodSeconds: 30
  ## ServiceAccount configuration
  ##
  serviceAccount:
    ## @param master.serviceAccount.create Specifies whether a ServiceAccount should be created
    ##
    create: false
    ## @param master.serviceAccount.name The name of the ServiceAccount to use.
    ## If not set and create is true, a name is generated using the common.names.fullname template
    ##
    name: ""
    ## @param master.serviceAccount.automountServiceAccountToken Whether to auto mount the service account token
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server
    ##
    automountServiceAccountToken: false
    ## @param master.serviceAccount.annotations Additional custom annotations for the ServiceAccount
    ##
    annotations: {}

## @section Redis&reg; replicas configuration parameters
##

replica:
  ## @param replica.replicaCount Number of Redis&reg; replicas to deploy
  ##
  replicaCount: 3
  ## @param replica.configuration Configuration for Redis&reg; replicas nodes
  ## ref: https://redis.io/topics/config
  ##
  configuration: ""
  ## @param replica.disableCommands Array with Redis&reg; commands to disable on replicas nodes
  ## Commands will be completely disabled by renaming each to an empty string.
  ## ref: https://redis.io/topics/security#disabling-of-specific-commands
  ##
  disableCommands:
    - FLUSHDB
    - FLUSHALL
  ## @param replica.command Override default container command (useful when using custom images)
  ##
  command: []
  ## @param replica.args Override default container args (useful when using custom images)
  ##
  args: []
  ## @param replica.preExecCmds Additional commands to run prior to starting Redis&reg; replicas
  ##
  preExecCmds: []
  ## @param replica.extraFlags Array with additional command line flags for Redis&reg; replicas
  ## e.g:
  ## extraFlags:
  ##  - "--maxmemory-policy volatile-ttl"
  ##  - "--repl-backlog-size 1024mb"
  ##
  extraFlags: []
  ## @param replica.extraEnvVars Array with extra environment variables to add to Redis&reg; replicas nodes
  ## e.g:
  ## extraEnvVars:
  ##   - name: FOO
  ##     value: "bar"
  ##
  extraEnvVars: []
  ## @param replica.extraEnvVarsCM Name of existing ConfigMap containing extra env vars for Redis&reg; replicas nodes
  ##
  extraEnvVarsCM: ""
  ## @param replica.extraEnvVarsSecret Name of existing Secret containing extra env vars for Redis&reg; replicas nodes
  ##
  extraEnvVarsSecret: ""
  ## @param replica.externalMaster.enabled Use external master for bootstrapping
  ## @param replica.externalMaster.host External master host to bootstrap from
  ## @param replica.externalMaster.port Port for Redis service external master host
  ##
  externalMaster:
    enabled: false
    host: ""
    port: 6379
  ## @param replica.containerPorts.redis Container port to open on Redis&reg; replicas nodes
  ##
  containerPorts:
    redis: 6379
  ## Configure extra options for Redis&reg; containers' liveness and readiness probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
  ## @param replica.startupProbe.enabled Enable startupProbe on Redis&reg; replicas nodes
  ## @param replica.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe
  ## @param replica.startupProbe.periodSeconds Period seconds for startupProbe
  ## @param replica.startupProbe.timeoutSeconds Timeout seconds for startupProbe
  ## @param replica.startupProbe.failureThreshold Failure threshold for startupProbe
  ## @param replica.startupProbe.successThreshold Success threshold for startupProbe
  ##
  startupProbe:
    enabled: true
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 22
  ## @param replica.livenessProbe.enabled Enable livenessProbe on Redis&reg; replicas nodes
  ## @param replica.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
  ## @param replica.livenessProbe.periodSeconds Period seconds for livenessProbe
  ## @param replica.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
  ## @param replica.livenessProbe.failureThreshold Failure threshold for livenessProbe
  ## @param replica.livenessProbe.successThreshold Success threshold for livenessProbe
  ##
  livenessProbe:
    enabled: true
    initialDelaySeconds: 20
    periodSeconds: 5
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 5
  ## @param replica.readinessProbe.enabled Enable readinessProbe on Redis&reg; replicas nodes
  ## @param replica.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
  ## @param replica.readinessProbe.periodSeconds Period seconds for readinessProbe
  ## @param replica.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
  ## @param replica.readinessProbe.failureThreshold Failure threshold for readinessProbe
  ## @param replica.readinessProbe.successThreshold Success threshold for readinessProbe
  ##
  readinessProbe:
    enabled: true
    initialDelaySeconds: 20
    periodSeconds: 5
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 5
  ## @param replica.customStartupProbe Custom startupProbe that overrides the default one
  ##
  customStartupProbe: {}
  ## @param replica.customLivenessProbe Custom livenessProbe that overrides the default one
  ##
  customLivenessProbe: {}
  ## @param replica.customReadinessProbe Custom readinessProbe that overrides the default one
  ##
  customReadinessProbe: {}
  ## Redis&reg; replicas resource requests and limits
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  ## @param replica.resources.limits The resources limits for the Redis&reg; replicas containers
  ## @param replica.resources.requests The requested resources for the Redis&reg; replicas containers
  ##
  resources:
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    limits:
      cpu: "500m"
      memory: "1024Mi"
    #   cpu: 250m
    #   memory: 256Mi
    requests: {}
    #   cpu: 250m
    #   memory: 256Mi
  ## Configure Pods Security Context
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
  ## @param replica.podSecurityContext.enabled Enabled Redis&reg; replicas pods' Security Context
  ## @param replica.podSecurityContext.fsGroup Set Redis&reg; replicas pod's Security Context fsGroup
  ##
  podSecurityContext:
    enabled: true
    runAsGroup: 1000
    runAsUser: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  ## Configure Container Security Context
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
  ## @param replica.containerSecurityContext.enabled Enabled Redis&reg; replicas containers' Security Context
  ## @param replica.containerSecurityContext.runAsUser Set Redis&reg; replicas containers' Security Context runAsUser
  ##
  containerSecurityContext:
    enabled: true
    runAsGroup: 1000
    runAsUser: 1000
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false
    seccompProfile:
      type: RuntimeDefault
  ## @param replica.schedulerName Alternate scheduler for Redis&reg; replicas pods
  ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
  ##
  schedulerName: ""
  ## @param replica.updateStrategy.type Redis&reg; replicas statefulset strategy type
  ## @skip replica.updateStrategy.rollingUpdate
  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
  ##
  updateStrategy:
    ## StrategyType
    ## Can be set to RollingUpdate, OnDelete (statefulset), Recreate (deployment)
    ##
    type: RollingUpdate
  ## @param replica.minReadySeconds How many seconds a pod needs to be ready before killing the next, during update
  ##
  minReadySeconds: 0
  ## @param replica.priorityClassName Redis&reg; replicas pods' priorityClassName
  ##
  priorityClassName: ""
  ## @param replica.podManagementPolicy podManagementPolicy to manage scaling operation of %%MAIN_CONTAINER_NAME%% pods
  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
  ##
  podManagementPolicy: ""
  ## @param replica.hostAliases Redis&reg; replicas pods host aliases
  ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
  ##
  hostAliases: []
  ## @param replica.podLabels Extra labels for Redis&reg; replicas pods
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
  ##
  podLabels: {}
  ## @param replica.podAnnotations Annotations for Redis&reg; replicas pods
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  podAnnotations:
    sidecar.istio.io/proxyCPULimit: "900m"
    sidecar.istio.io/proxyMemoryLimit: "1024Mi"
    # sidecar.istio.io/proxyCPU: "450m"
    # sidecar.istio.io/proxyMemory: "512Mi"
  ## @param replica.shareProcessNamespace Share a single process namespace between all of the containers in Redis&reg; replicas pods
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/
  ##
  shareProcessNamespace: false
  ## @param replica.podAffinityPreset Pod affinity preset. Ignored if `replica.affinity` is set. Allowed values: `soft` or `hard`
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
  ##
  podAffinityPreset: ""
  ## @param replica.podAntiAffinityPreset Pod anti-affinity preset. Ignored if `replica.affinity` is set. Allowed values: `soft` or `hard`
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
  ##
  podAntiAffinityPreset: soft
  ## Node affinity preset
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
  ##
  nodeAffinityPreset:
    ## @param replica.nodeAffinityPreset.type Node affinity preset type. Ignored if `replica.affinity` is set. Allowed values: `soft` or `hard`
    ##
    type: ""
    ## @param replica.nodeAffinityPreset.key Node label key to match. Ignored if `replica.affinity` is set
    ##
    key: ""
    ## @param replica.nodeAffinityPreset.values Node label values to match. Ignored if `replica.affinity` is set
    ## E.g.
    ## values:
    ##   - e2e-az1
    ##   - e2e-az2
    ##
    values: []
  ## @param replica.affinity Affinity for Redis&reg; replicas pods assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ## NOTE: `replica.podAffinityPreset`, `replica.podAntiAffinityPreset`, and `replica.nodeAffinityPreset` will be ignored when it's set
  ##
  affinity: {}
  ## @param replica.nodeSelector Node labels for Redis&reg; replicas pods assignment
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}
  ## @param replica.tolerations Tolerations for Redis&reg; replicas pods assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: []
  ## @param replica.topologySpreadConstraints Spread Constraints for Redis&reg; replicas pod assignment
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
  ## E.g.
  ## topologySpreadConstraints:
  ##   - maxSkew: 1
  ##     topologyKey: node
  ##     whenUnsatisfiable: DoNotSchedule
  ##
  topologySpreadConstraints: []
  ## @param replica.dnsPolicy DNS Policy for Redis&reg; replica pods
  ## ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
  ## E.g.
  ## dnsPolicy: ClusterFirst
  dnsPolicy: ""
  ## @param replica.dnsConfig DNS Configuration for Redis&reg; replica pods
  ## ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
  ## E.g.
  ## dnsConfig:
  ##   options:
  ##   - name: ndots
  ##     value: "4"
  ##   - name: single-request-reopen
  dnsConfig: {}
  ## @param replica.lifecycleHooks for the Redis&reg; replica container(s) to automate configuration before or after startup
  ##
  lifecycleHooks: {}
  ## @param replica.extraVolumes Optionally specify extra list of additional volumes for the Redis&reg; replicas pod(s)
  ##
  extraVolumes: []
  ## @param replica.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the Redis&reg; replicas container(s)
  ##
  extraVolumeMounts: []
  ## @param replica.sidecars Add additional sidecar containers to the Redis&reg; replicas pod(s)
  ## e.g:
  ## sidecars:
  ##   - name: your-image-name
  ##     image: your-image
  ##     imagePullPolicy: Always
  ##     ports:
  ##       - name: portname
  ##         containerPort: 1234
  ##
  sidecars: []
  ## @param replica.initContainers Add additional init containers to the Redis&reg; replicas pod(s)
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
  ## e.g:
  ## initContainers:
  ##  - name: your-image-name
  ##    image: your-image
  ##    imagePullPolicy: Always
  ##    command: ['sh', '-c', 'echo "hello world"']
  ##
  initContainers: []
  ## Persistence Parameters
  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    ## @param replica.persistence.enabled Enable persistence on Redis&reg; replicas nodes using Persistent Volume Claims
    ##
    enabled: true
    ## @param replica.persistence.medium Provide a medium for `emptyDir` volumes.
    ##
    medium: ""
    ## @param replica.persistence.sizeLimit Set this to enable a size limit for `emptyDir` volumes.
    ##
    sizeLimit: ""
    ##  @param replica.persistence.path The path the volume will be mounted at on Redis&reg; replicas containers
    ## NOTE: Useful when using different Redis&reg; images
    ##
    path: /data
    ## @param replica.persistence.subPath The subdirectory of the volume to mount on Redis&reg; replicas containers
    ## NOTE: Useful in dev environments
    ##
    subPath: ""
    ## @param replica.persistence.subPathExpr Used to construct the subPath subdirectory of the volume to mount on Redis&reg; replicas containers
    ##
    subPathExpr: ""
    ## @param replica.persistence.storageClass Persistent Volume storage class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is set, choosing the default provisioner
    ##
    storageClass: ""
    ## @param replica.persistence.accessModes Persistent Volume access modes
    ##
    accessModes:
      - ReadWriteOnce
    ## @param replica.persistence.size Persistent Volume size
    ##
    size: 8Gi
    ## @param replica.persistence.annotations Additional custom annotations for the PVC
    ##
    annotations: {}
    ## @param replica.persistence.selector Additional labels to match for the PVC
    ## e.g:
    ## selector:
    ##   matchLabels:
    ##     app: my-app
    ##
    selector: {}
    ## @param replica.persistence.dataSource Custom PVC data source
    ##
    dataSource: {}
    ## @param replica.persistence.existingClaim Use an existing PVC which must be created manually before being bound
    ## NOTE: requires replica.persistence.enabled: true
    ##
    existingClaim: ""
  ## Redis&reg; replicas service parameters
  ##
  service:
    ## @param replica.service.type Redis&reg; replicas service type
    ##
    type: ClusterIP
    ## @param replica.service.ports.redis Redis&reg; replicas service port
    ##
    ports:
      redis: 6379
    ## @param replica.service.nodePorts.redis Node port for Redis&reg; replicas
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
    ## NOTE: choose port between <30000-32767>
    ##
    nodePorts:
      redis: ""
    ## @param replica.service.externalTrafficPolicy Redis&reg; replicas service external traffic policy
    ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
    ##
    externalTrafficPolicy: Cluster
    ## @param replica.service.internalTrafficPolicy Redis&reg; replicas service internal traffic policy (requires Kubernetes v1.22 or greater to be usable)
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/
    ##
    internalTrafficPolicy: Cluster
    ## @param replica.service.extraPorts Extra ports to expose (normally used with the `sidecar` value)
    ##
    extraPorts: []
    ## @param replica.service.clusterIP Redis&reg; replicas service Cluster IP
    ##
    clusterIP: ""
    ## @param replica.service.loadBalancerIP Redis&reg; replicas service Load Balancer IP
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
    ##
    loadBalancerIP: ""
    ## @param replica.service.loadBalancerSourceRanges Redis&reg; replicas service Load Balancer sources
    ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
    ## e.g.
    ## loadBalancerSourceRanges:
    ##   - 10.10.10.0/24
    ##
    loadBalancerSourceRanges: []
    ## @param replica.service.annotations Additional custom annotations for Redis&reg; replicas service
    ##
    annotations: {}
    ## @param replica.service.sessionAffinity Session Affinity for Kubernetes service, can be "None" or "ClientIP"
    ## If "ClientIP", consecutive client requests will be directed to the same Pod
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
    ##
    sessionAffinity: None
    ## @param replica.service.sessionAffinityConfig Additional settings for the sessionAffinity
    ## sessionAffinityConfig:
    ##   clientIP:
    ##     timeoutSeconds: 300
    ##
    sessionAffinityConfig: {}
  ## @param replica.terminationGracePeriodSeconds Integer setting the termination grace period for the redis-replicas pods
  ##
  terminationGracePeriodSeconds: 30
  ## Autoscaling configuration
  ##
  autoscaling:
    ## @param replica.autoscaling.enabled Enable replica autoscaling settings
    ##
    enabled: false
    ## @param replica.autoscaling.minReplicas Minimum replicas for the pod autoscaling
    ##
    minReplicas: 1
    ## @param replica.autoscaling.maxReplicas Maximum replicas for the pod autoscaling
    ##
    maxReplicas: 11
    ## @param replica.autoscaling.targetCPU Percentage of CPU to consider when autoscaling
    ##
    targetCPU: ""
    ## @param replica.autoscaling.targetMemory Percentage of Memory to consider when autoscaling
    ##
    targetMemory: ""
  ## ServiceAccount configuration
  ##
  serviceAccount:
    ## @param replica.serviceAccount.create Specifies whether a ServiceAccount should be created
    ##
    create: false
    ## @param replica.serviceAccount.name The name of the ServiceAccount to use.
    ## If not set and create is true, a name is generated using the common.names.fullname template
    ##
    name: ""
    ## @param replica.serviceAccount.automountServiceAccountToken Whether to auto mount the service account token
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server
    ##
    automountServiceAccountToken: false
    ## @param replica.serviceAccount.annotations Additional custom annotations for the ServiceAccount
    ##
    annotations: {}
## @section Redis&reg; Sentinel configuration parameters
##

sentinel:
  ## @param sentinel.enabled Use Redis&reg; Sentinel on Redis&reg; pods.
  ## IMPORTANT: this will disable the master and replicas services and
  ## create a single Redis&reg; service exposing both the Redis and Sentinel ports
  ##
  enabled: true
  ## Bitnami Redis&reg; Sentinel image version
  ## ref: https://hub.docker.com/r/bitnami/redis-sentinel/tags/
  ## @param sentinel.image.registry Redis&reg; Sentinel image registry
  ## @param sentinel.image.repository Redis&reg; Sentinel image repository
  ## @param sentinel.image.tag Redis&reg; Sentinel image tag (immutable tags are recommended)
  ## @param sentinel.image.digest Redis&reg; Sentinel image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
  ## @param sentinel.image.pullPolicy Redis&reg; Sentinel image pull policy
  ## @param sentinel.image.pullSecrets Redis&reg; Sentinel image pull secrets
  ## @param sentinel.image.debug Enable image debug mode
  ##
  image:
    registry: docker.io
    repository: bitnami/redis-sentinel
    tag: 7.0.8-debian-11-r11
    digest: ""
    ## Specify an imagePullPolicy
    ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
    ## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
    ##
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## e.g:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
    ## Enable debug mode
    ##
    debug: false
  ## @param sentinel.masterSet Master set name
  ##
  masterSet: redis
  ## @param sentinel.quorum Sentinel Quorum
  ##
  quorum: 2
  ## @param sentinel.getMasterTimeout Amount of time to allow before get_sentinel_master_info() times out.
  ## NOTE: This is directly related to the startupProbes which are configured to run every 10 seconds for a total of 22 failures. If adjusting this value, also adjust the startupProbes.
  getMasterTimeout: 220
  ## @param sentinel.automateClusterRecovery Automate cluster recovery in cases where the last replica is not considered a good replica and Sentinel won't automatically failover to it.
  ## This also prevents any new replica from starting until the last remaining replica is elected as master to guarantee that it is the one to be elected by Sentinel, and not a newly started replica with no data.
  ## NOTE: This feature requires a "downAfterMilliseconds" value less or equal to 2000.
  ##
  automateClusterRecovery: false
  ## @param sentinel.redisShutdownWaitFailover Whether the Redis&reg; master container waits for the failover at shutdown (in addition to the Redis&reg; Sentinel container).
  redisShutdownWaitFailover: true
  ## Sentinel timing restrictions
  ## @param sentinel.downAfterMilliseconds Timeout for detecting a Redis&reg; node is down
  ## @param sentinel.failoverTimeout Timeout for performing an election failover
  ##
  downAfterMilliseconds: 60000
  failoverTimeout: 180000
  ## @param sentinel.parallelSyncs Number of replicas that can be reconfigured in parallel to use the new master after a failover
  ##
  parallelSyncs: 1
  ## @param sentinel.configuration Configuration for Redis&reg; Sentinel nodes
  ## ref: https://redis.io/topics/sentinel
  ##
  configuration: ""
  ## @param sentinel.command Override default container command (useful when using custom images)
  ##
  command: []
  ## @param sentinel.args Override default container args (useful when using custom images)
  ##
  args: []
  ## @param sentinel.preExecCmds Additional commands to run prior to starting Redis&reg; Sentinel
  ##
  preExecCmds: []
  ## @param sentinel.extraEnvVars Array with extra environment variables to add to Redis&reg; Sentinel nodes
  ## e.g:
  ## extraEnvVars:
  ##   - name: FOO
  ##     value: "bar"
  ##
  extraEnvVars: []
  ## @param sentinel.extraEnvVarsCM Name of existing ConfigMap containing extra env vars for Redis&reg; Sentinel nodes
  ##
  extraEnvVarsCM: ""
  ## @param sentinel.extraEnvVarsSecret Name of existing Secret containing extra env vars for Redis&reg; Sentinel nodes
  ##
  extraEnvVarsSecret: ""
  ## @param sentinel.externalMaster.enabled Use external master for bootstrapping
  ## @param sentinel.externalMaster.host External master host to bootstrap from
  ## @param sentinel.externalMaster.port Port for Redis service external master host
  ##
  externalMaster:
    enabled: false
    host: ""
    port: 6379
  ## @param sentinel.containerPorts.sentinel Container port to open on Redis&reg; Sentinel nodes
  ##
  containerPorts:
    sentinel: 26379
  ## Configure extra options for Redis&reg; containers' liveness and readiness probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
  ## @param sentinel.startupProbe.enabled Enable startupProbe on Redis&reg; Sentinel nodes
  ## @param sentinel.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe
  ## @param sentinel.startupProbe.periodSeconds Period seconds for startupProbe
  ## @param sentinel.startupProbe.timeoutSeconds Timeout seconds for startupProbe
  ## @param sentinel.startupProbe.failureThreshold Failure threshold for startupProbe
  ## @param sentinel.startupProbe.successThreshold Success threshold for startupProbe
  ##
  startupProbe:
    enabled: true
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 22
  ## @param sentinel.livenessProbe.enabled Enable livenessProbe on Redis&reg; Sentinel nodes
  ## @param sentinel.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
  ## @param sentinel.livenessProbe.periodSeconds Period seconds for livenessProbe
  ## @param sentinel.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
  ## @param sentinel.livenessProbe.failureThreshold Failure threshold for livenessProbe
  ## @param sentinel.livenessProbe.successThreshold Success threshold for livenessProbe
  ##
  livenessProbe:
    enabled: true
    initialDelaySeconds: 20
    periodSeconds: 5
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 5
  ## @param sentinel.readinessProbe.enabled Enable readinessProbe on Redis&reg; Sentinel nodes
  ## @param sentinel.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
  ## @param sentinel.readinessProbe.periodSeconds Period seconds for readinessProbe
  ## @param sentinel.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
  ## @param sentinel.readinessProbe.failureThreshold Failure threshold for readinessProbe
  ## @param sentinel.readinessProbe.successThreshold Success threshold for readinessProbe
  ##
  readinessProbe:
    enabled: true
    initialDelaySeconds: 20
    periodSeconds: 5
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 5
  ## @param sentinel.customStartupProbe Custom startupProbe that overrides the default one
  ##
  customStartupProbe: {}
  ## @param sentinel.customLivenessProbe Custom livenessProbe that overrides the default one
  ##
  customLivenessProbe: {}
  ## @param sentinel.customReadinessProbe Custom readinessProbe that overrides the default one
  ##
  customReadinessProbe: {}
  ## Persistence parameters
  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    ## @param sentinel.persistence.enabled Enable persistence on Redis&reg; sentinel nodes using Persistent Volume Claims (Experimental)
    ##
    enabled: false
    ## @param sentinel.persistence.storageClass Persistent Volume storage class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is set, choosing the default provisioner
    ##
    storageClass: ""
    ## @param sentinel.persistence.accessModes Persistent Volume access modes
    ##
    accessModes:
      - ReadWriteOnce
    ## @param sentinel.persistence.size Persistent Volume size
    ##
    size: 100Mi
    ## @param sentinel.persistence.annotations Additional custom annotations for the PVC
    ##
    annotations: {}
    ## @param sentinel.persistence.selector Additional labels to match for the PVC
    ## e.g:
    ## selector:
    ##   matchLabels:
    ##     app: my-app
    ##
    selector: {}
    ## @param sentinel.persistence.dataSource Custom PVC data source
    ##
    dataSource: {}
    ## @param sentinel.persistence.medium Provide a medium for `emptyDir` volumes.
    ##
    medium: ""
    ## @param sentinel.persistence.sizeLimit Set this to enable a size limit for `emptyDir` volumes.
    ##
    sizeLimit: ""
  ## Redis&reg; Sentinel resource requests and limits
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  ## @param sentinel.resources.limits The resources limits for the Redis&reg; Sentinel containers
  ## @param sentinel.resources.requests The requested resources for the Redis&reg; Sentinel containers
  ##
  resources:
    limits:
      cpu: "500m"
      memory: "1024Mi"
    requests: {}
  ## Configure Container Security Context
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
  ## @param sentinel.containerSecurityContext.enabled Enabled Redis&reg; Sentinel containers' Security Context
  ## @param sentinel.containerSecurityContext.runAsUser Set Redis&reg; Sentinel containers' Security Context runAsUser
  ##
  containerSecurityContext:
    enabled: true
    runAsGroup: 1000
    runAsUser: 1000
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false
  ## @param sentinel.lifecycleHooks for the Redis&reg; sentinel container(s) to automate configuration before or after startup
  ##
  lifecycleHooks: {}
  ## @param sentinel.extraVolumes Optionally specify extra list of additional volumes for the Redis&reg; Sentinel
  ##
  extraVolumes: []
  ## @param sentinel.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the Redis&reg; Sentinel container(s)
  ##
  extraVolumeMounts: []
  ## Redis&reg; Sentinel service parameters
  ##
  service:
    ## @param sentinel.service.type Redis&reg; Sentinel service type
    ##
    type: ClusterIP
    ## @param sentinel.service.ports.redis Redis&reg; service port for Redis&reg;
    ## @param sentinel.service.ports.sentinel Redis&reg; service port for Redis&reg; Sentinel
    ##
    ports:
      redis: 6379
      sentinel: 26379
    ## @param sentinel.service.nodePorts.redis Node port for Redis&reg;
    ## @param sentinel.service.nodePorts.sentinel Node port for Sentinel
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
    ## NOTE: choose port between <30000-32767>
    ## NOTE: By leaving these values blank, they will be generated by ports-configmap
    ##       If setting manually, please leave at least replica.replicaCount + 1 in between sentinel.service.nodePorts.redis and sentinel.service.nodePorts.sentinel to take into account the ports that will be created while incrementing that base port
    ##
    nodePorts:
      redis: ""
      sentinel: ""
    ## @param sentinel.service.externalTrafficPolicy Redis&reg; Sentinel service external traffic policy
    ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
    ##
    externalTrafficPolicy: Cluster
    ## @param sentinel.service.extraPorts Extra ports to expose (normally used with the `sidecar` value)
    ##
    extraPorts: []
    ## @param sentinel.service.clusterIP Redis&reg; Sentinel service Cluster IP
    ##
    clusterIP: ""
    ## @param sentinel.service.loadBalancerIP Redis&reg; Sentinel service Load Balancer IP
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
    ##
    loadBalancerIP: ""
    ## @param sentinel.service.loadBalancerSourceRanges Redis&reg; Sentinel service Load Balancer sources
    ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
    ## e.g.
    ## loadBalancerSourceRanges:
    ##   - 10.10.10.0/24
    ##
    loadBalancerSourceRanges: []
    ## @param sentinel.service.annotations Additional custom annotations for Redis&reg; Sentinel service
    ##
    annotations: {}
    ## @param sentinel.service.sessionAffinity Session Affinity for Kubernetes service, can be "None" or "ClientIP"
    ## If "ClientIP", consecutive client requests will be directed to the same Pod
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
    ##
    sessionAffinity: None
    ## @param sentinel.service.sessionAffinityConfig Additional settings for the sessionAffinity
    ## sessionAffinityConfig:
    ##   clientIP:
    ##     timeoutSeconds: 300
    ##
    sessionAffinityConfig: {}
  ## @param sentinel.terminationGracePeriodSeconds Integer setting the termination grace period for the redis-node pods
  ##
  terminationGracePeriodSeconds: 30

## @section Other Parameters
##

## Network Policy configuration
## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
##
networkPolicy:
  ## @param networkPolicy.enabled Enable creation of NetworkPolicy resources
  ##
  enabled: false
  ## @param networkPolicy.allowExternal Don't require client label for connections
  ## When set to false, only pods with the correct client label will have network access to the ports
  ## Redis&reg; is listening on. When true, Redis&reg; will accept connections from any source
  ## (with the correct destination port).
  ##
  allowExternal: true
  ## @param networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy
  ## e.g:
  ## extraIngress:
  ##   - ports:
  ##       - port: 1234
  ##     from:
  ##       - podSelector:
  ##           - matchLabels:
  ##               - role: frontend
  ##       - podSelector:
  ##           - matchExpressions:
  ##               - key: role
  ##                 operator: In
  ##                 values:
  ##                   - frontend
  ##
  extraIngress: []
  ## @param networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy
  ## e.g:
  ## extraEgress:
  ##   - ports:
  ##       - port: 1234
  ##     to:
  ##       - podSelector:
  ##           - matchLabels:
  ##               - role: frontend
  ##       - podSelector:
  ##           - matchExpressions:
  ##               - key: role
  ##                 operator: In
  ##                 values:
  ##                   - frontend
  ##
  extraEgress: []
  ## @param networkPolicy.ingressNSMatchLabels Labels to match to allow traffic from other namespaces
  ## @param networkPolicy.ingressNSPodMatchLabels Pod labels to match to allow traffic from other namespaces
  ##
  ingressNSMatchLabels: {}
  ingressNSPodMatchLabels: {}
## PodSecurityPolicy configuration
## ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
##
podSecurityPolicy:
  ## @param podSecurityPolicy.create Whether to create a PodSecurityPolicy. WARNING: PodSecurityPolicy is deprecated in Kubernetes v1.21 or later, unavailable in v1.25 or later
  ##
  create: false
  ## @param podSecurityPolicy.enabled Enable PodSecurityPolicy's RBAC rules
  ##
  enabled: false
## RBAC configuration
##
rbac:
  ## @param rbac.create Specifies whether RBAC resources should be created
  ##
  create: false
  ## @param rbac.rules Custom RBAC rules to set
  ## e.g:
  ## rules:
  ##   - apiGroups:
  ##       - ""
  ##     resources:
  ##       - pods
  ##     verbs:
  ##       - get
  ##       - list
  ##
  rules: []
## ServiceAccount configuration
##
serviceAccount:
  ## @param serviceAccount.create Specifies whether a ServiceAccount should be created
  ##
  create: true
  ## @param serviceAccount.name The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the common.names.fullname template
  ##
  name: ""
  ## @param serviceAccount.automountServiceAccountToken Whether to auto mount the service account token
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server
  ##
  automountServiceAccountToken: false
  ## @param serviceAccount.annotations Additional custom annotations for the ServiceAccount
  ##
  annotations: {}
## Redis&reg; Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
  ## @param pdb.create Specifies whether a PodDisruptionBudget should be created
  ##
  create: false
  ## @param pdb.minAvailable Min number of pods that must still be available after the eviction
  ##
  minAvailable: 1
  ## @param pdb.maxUnavailable Max number of pods that can be unavailable after the eviction
  ##
  maxUnavailable: ""
## TLS configuration
##
tls:
  ## @param tls.enabled Enable TLS traffic
  ##
  enabled: false
  ## @param tls.authClients Require clients to authenticate
  ##
  authClients: true
  ## @param tls.autoGenerated Enable autogenerated certificates
  ##
  autoGenerated: false
  ## @param tls.existingSecret The name of the existing secret that contains the TLS certificates
  ##
  existingSecret: ""
  ## @param tls.certificatesSecret DEPRECATED. Use existingSecret instead.
  ##
  certificatesSecret: ""
  ## @param tls.certFilename Certificate filename
  ##
  certFilename: ""
  ## @param tls.certKeyFilename Certificate Key filename
  ##
  certKeyFilename: ""
  ## @param tls.certCAFilename CA Certificate filename
  ##
  certCAFilename: ""
  ## @param tls.dhParamsFilename File containing DH params (in order to support DH based ciphers)
  ##
  dhParamsFilename: ""

## @section Metrics Parameters
##

metrics:
  ## @param metrics.enabled Start a sidecar prometheus exporter to expose Redis&reg; metrics
  ##
  enabled: false
  ## Bitnami Redis&reg; Exporter image
  ## ref: https://hub.docker.com/r/bitnami/redis-exporter/tags/
  ## @param metrics.image.registry Redis&reg; Exporter image registry
  ## @param metrics.image.repository Redis&reg; Exporter image repository
  ## @param metrics.image.tag Redis&reg; Exporter image tag (immutable tags are recommended)
  ## @param metrics.image.digest Redis&reg; Exporter image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
  ## @param metrics.image.pullPolicy Redis&reg; Exporter image pull policy
  ## @param metrics.image.pullSecrets Redis&reg; Exporter image pull secrets
  ##
  image:
    registry: docker.io
    repository: bitnami/redis-exporter
    tag: 1.46.0-debian-11-r5
    digest: ""
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## e.g:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  ## Configure extra options for Redis&reg; containers' liveness, readiness & startup probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
  ## @param metrics.startupProbe.enabled Enable startupProbe on Redis&reg; replicas nodes
  ## @param metrics.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe
  ## @param metrics.startupProbe.periodSeconds Period seconds for startupProbe
  ## @param metrics.startupProbe.timeoutSeconds Timeout seconds for startupProbe
  ## @param metrics.startupProbe.failureThreshold Failure threshold for startupProbe
  ## @param metrics.startupProbe.successThreshold Success threshold for startupProbe
  ##
  startupProbe:
    enabled: false
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 5
  ## @param metrics.livenessProbe.enabled Enable livenessProbe on Redis&reg; replicas nodes
  ## @param metrics.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
  ## @param metrics.livenessProbe.periodSeconds Period seconds for livenessProbe
  ## @param metrics.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
  ## @param metrics.livenessProbe.failureThreshold Failure threshold for livenessProbe
  ## @param metrics.livenessProbe.successThreshold Success threshold for livenessProbe
  ##
  livenessProbe:
    enabled: true
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 5
  ## @param metrics.readinessProbe.enabled Enable readinessProbe on Redis&reg; replicas nodes
  ## @param metrics.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
  ## @param metrics.readinessProbe.periodSeconds Period seconds for readinessProbe
  ## @param metrics.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
  ## @param metrics.readinessProbe.failureThreshold Failure threshold for readinessProbe
  ## @param metrics.readinessProbe.successThreshold Success threshold for readinessProbe
  ##
  readinessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3
  ## @param metrics.customStartupProbe Custom startupProbe that overrides the default one
  ##
  customStartupProbe: {}
  ## @param metrics.customLivenessProbe Custom livenessProbe that overrides the default one
  ##
  customLivenessProbe: {}
  ## @param metrics.customReadinessProbe Custom readinessProbe that overrides the default one
  ##
  customReadinessProbe: {}
  ## @param metrics.command Override default metrics container init command (useful when using custom images)
  ##
  command: []
  ## @param metrics.redisTargetHost A way to specify an alternative Redis&reg; hostname
  ## Useful for certificate CN/SAN matching
  ##
  redisTargetHost: "localhost"
  ## @param metrics.extraArgs Extra arguments for Redis&reg; exporter, for example:
  ## e.g.:
  ## extraArgs:
  ##   check-keys: myKey,myOtherKey
  ##
  extraArgs: {}
  ## @param metrics.extraEnvVars Array with extra environment variables to add to Redis&reg; exporter
  ## e.g:
  ## extraEnvVars:
  ##   - name: FOO
  ##     value: "bar"
  ##
  extraEnvVars: []
  ## Configure Container Security Context
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
  ## @param metrics.containerSecurityContext.enabled Enabled Redis&reg; exporter containers' Security Context
  ## @param metrics.containerSecurityContext.runAsUser Set Redis&reg; exporter containers' Security Context runAsUser
  ##
  containerSecurityContext:
    enabled: true
    runAsGroup: 1000
    runAsUser: 1000
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false
  ## @param metrics.extraVolumes Optionally specify extra list of additional volumes for the Redis&reg; metrics sidecar
  ##
  extraVolumes: []
  ## @param metrics.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the Redis&reg; metrics sidecar
  ##
  extraVolumeMounts: []
  ## Redis&reg; exporter resource requests and limits
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  ## @param metrics.resources.limits The resources limits for the Redis&reg; exporter container
  ## @param metrics.resources.requests The requested resources for the Redis&reg; exporter container
  ##
  resources:
    limits:
      cpu: "500m"
      memory: "1024Mi"
    requests: {}
  ## @param metrics.podLabels Extra labels for Redis&reg; exporter pods
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
  ##
  podLabels: {}
  ## @param metrics.podAnnotations [object] Annotations for Redis&reg; exporter pods
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9121"
    sidecar.istio.io/proxyCPULimit: "900m"
    sidecar.istio.io/proxyMemoryLimit: "1024Mi"
    # sidecar.istio.io/proxyCPU: "450m"
    # sidecar.istio.io/proxyMemory: "512Mi"
  ## Redis&reg; exporter service parameters
  ##
  service:
    ## @param metrics.service.type Redis&reg; exporter service type
    ##
    type: ClusterIP
    ## @param metrics.service.port Redis&reg; exporter service port
    ##
    port: 9121
    ## @param metrics.service.externalTrafficPolicy Redis&reg; exporter service external traffic policy
    ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
    ##
    externalTrafficPolicy: Cluster
    ## @param metrics.service.extraPorts Extra ports to expose (normally used with the `sidecar` value)
    ##
    extraPorts: []
    ## @param metrics.service.loadBalancerIP Redis&reg; exporter service Load Balancer IP
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
    ##
    loadBalancerIP: ""
    ## @param metrics.service.loadBalancerSourceRanges Redis&reg; exporter service Load Balancer sources
    ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
    ## e.g.
    ## loadBalancerSourceRanges:
    ##   - 10.10.10.0/24
    ##
    loadBalancerSourceRanges: []
    ## @param metrics.service.annotations Additional custom annotations for Redis&reg; exporter service
    ##
    annotations: {}
  ## Prometheus Service Monitor
  ## ref: https://github.com/coreos/prometheus-operator
  ##      https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
  ##
  serviceMonitor:
    ## @param metrics.serviceMonitor.enabled Create ServiceMonitor resource(s) for scraping metrics using PrometheusOperator
    ##
    enabled: false
    ## @param metrics.serviceMonitor.namespace The namespace in which the ServiceMonitor will be created
    ##
    namespace: ""
    ## @param metrics.serviceMonitor.interval The interval at which metrics should be scraped
    ##
    interval: 30s
    ## @param metrics.serviceMonitor.scrapeTimeout The timeout after which the scrape is ended
    ##
    scrapeTimeout: ""
    ## @param metrics.serviceMonitor.relabellings Metrics RelabelConfigs to apply to samples before scraping.
    ##
    relabellings: []
    ## @param metrics.serviceMonitor.metricRelabelings Metrics RelabelConfigs to apply to samples before ingestion.
    ##
    metricRelabelings: []
    ## @param metrics.serviceMonitor.honorLabels Specify honorLabels parameter to add the scrape endpoint
    ##
    honorLabels: false
    ## @param metrics.serviceMonitor.additionalLabels Additional labels that can be used so ServiceMonitor resource(s) can be discovered by Prometheus
    ##
    additionalLabels: {}
    ## @param metrics.serviceMonitor.podTargetLabels Labels from the Kubernetes pod to be transferred to the created metrics
    ##
    podTargetLabels: []
  ## Custom PrometheusRule to be defined
  ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
  ##
  prometheusRule:
    ## @param metrics.prometheusRule.enabled Create a custom prometheusRule Resource for scraping metrics using PrometheusOperator
    ##
    enabled: false
    ## @param metrics.prometheusRule.namespace The namespace in which the prometheusRule will be created
    ##
    namespace: ""
    ## @param metrics.prometheusRule.additionalLabels Additional labels for the prometheusRule
    ##
    additionalLabels: {}
    ## @param metrics.prometheusRule.rules Custom Prometheus rules
    ## e.g:
    ## rules:
    ##   - alert: RedisDown
    ##     expr: redis_up{service="{{ template "common.names.fullname" . }}-metrics"} == 0
    ##     for: 2m
    ##     labels:
    ##       severity: error
    ##     annotations:
    ##       summary: Redis&reg; instance {{ "{{ $labels.instance }}" }} down
    ##       description: Redis&reg; instance {{ "{{ $labels.instance }}" }} is down
    ##    - alert: RedisMemoryHigh
    ##      expr: >
    ##        redis_memory_used_bytes{service="{{ template "common.names.fullname" . }}-metrics"} * 100
    ##        /
    ##        redis_memory_max_bytes{service="{{ template "common.names.fullname" . }}-metrics"}
    ##        > 90
    ##      for: 2m
    ##      labels:
    ##        severity: error
    ##      annotations:
    ##        summary: Redis&reg; instance {{ "{{ $labels.instance }}" }} is using too much memory
    ##        description: |
    ##          Redis&reg; instance {{ "{{ $labels.instance }}" }} is using {{ "{{ $value }}" }}% of its available memory.
    ##    - alert: RedisKeyEviction
    ##      expr: |
    ##        increase(redis_evicted_keys_total{service="{{ template "common.names.fullname" . }}-metrics"}[5m]) > 0
    ##      for: 1s
    ##      labels:
    ##        severity: error
    ##      annotations:
    ##        summary: Redis&reg; instance {{ "{{ $labels.instance }}" }} has evicted keys
    ##        description: |
    ##          Redis&reg; instance {{ "{{ $labels.instance }}" }} has evicted {{ "{{ $value }}" }} keys in the last 5 minutes.
    ##
    rules: []

## @section Init Container Parameters
##

## 'volumePermissions' init container parameters
## Changes the owner and group of the persistent volume mount point to runAsUser:fsGroup values
##   based on the *podSecurityContext/*containerSecurityContext parameters
##
volumePermissions:
  ## @param volumePermissions.enabled Enable init container that changes the owner/group of the PV mount point to `runAsUser:fsGroup`
  ##
  enabled: false
  ## Bitnami Shell image
  ## ref: https://hub.docker.com/r/bitnami/bitnami-shell/tags/
  ## @param volumePermissions.image.registry Bitnami Shell image registry
  ## @param volumePermissions.image.repository Bitnami Shell image repository
  ## @param volumePermissions.image.tag Bitnami Shell image tag (immutable tags are recommended)
  ## @param volumePermissions.image.digest Bitnami Shell image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
  ## @param volumePermissions.image.pullPolicy Bitnami Shell image pull policy
  ## @param volumePermissions.image.pullSecrets Bitnami Shell image pull secrets
  ##
  image:
    registry: docker.io
    repository: bitnami/bitnami-shell
    tag: 11-debian-11-r86
    digest: ""
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## e.g:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  ## Init container's resource requests and limits
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  ## @param volumePermissions.resources.limits The resources limits for the init container
  ## @param volumePermissions.resources.requests The requested resources for the init container
  ##
  resources:
    limits:
      cpu: "500m"
      memory: "1024Mi"
    requests: {}
  ## Init container Container Security Context
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
  ## @param volumePermissions.containerSecurityContext.runAsUser Set init container's Security Context runAsUser
  ## NOTE: when runAsUser is set to special value "auto", init container will try to chown the
  ##   data folder to auto-determined user&group, using commands: `id -u`:`id -G | cut -d" " -f2`
  ##   "auto" is especially useful for OpenShift which has scc with dynamic user ids (and 0 is not allowed)
  ##
  containerSecurityContext:
    runAsGroup: 1000
    runAsUser: 1000
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false

## init-sysctl container parameters
## used to perform sysctl operation to modify Kernel settings (needed sometimes to avoid warnings)
##
sysctl:
  ## @param sysctl.enabled Enable init container to modify Kernel settings
  ##
  enabled: false
  ## Bitnami Shell image
  ## ref: https://hub.docker.com/r/bitnami/bitnami-shell/tags/
  ## @param sysctl.image.registry Bitnami Shell image registry
  ## @param sysctl.image.repository Bitnami Shell image repository
  ## @param sysctl.image.tag Bitnami Shell image tag (immutable tags are recommended)
  ## @param sysctl.image.digest Bitnami Shell image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
  ## @param sysctl.image.pullPolicy Bitnami Shell image pull policy
  ## @param sysctl.image.pullSecrets Bitnami Shell image pull secrets
  ##
  image:
    registry: docker.io
    repository: bitnami/bitnami-shell
    tag: 11-debian-11-r86
    digest: ""
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## e.g:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  ## @param sysctl.command Override default init-sysctl container command (useful when using custom images)
  ##
  command: []
  ## @param sysctl.mountHostSys Mount the host `/sys` folder to `/host-sys`
  ##
  mountHostSys: false
  ## Init container's resource requests and limits
  ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
  ## @param sysctl.resources.limits The resources limits for the init container
  ## @param sysctl.resources.requests The requested resources for the init container
  ##
  resources:
    limits:
      cpu: "500m"
      memory: "1024Mi"
    requests: {}

## @section useExternalDNS Parameters
##
## @param useExternalDNS.enabled Enable various syntax that would enable external-dns to work.  Note this requires a working installation of `external-dns` to be usable.
## @param useExternalDNS.additionalAnnotations Extra annotations to be utilized when `external-dns` is enabled.
## @param useExternalDNS.annotationKey The annotation key utilized when `external-dns` is enabled. Setting this to `false` will disable annotations.
## @param useExternalDNS.suffix The DNS suffix utilized when `external-dns` is enabled.  Note that we prepend the suffix with the full name of the release.
##
useExternalDNS:
  enabled: false
  suffix: ""
  annotationKey: external-dns.alpha.kubernetes.io/
  additionalAnnotations: {}
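
For context on where both errors come from: NetBox connects through redis-py's Sentinel support, which asks Sentinel for the current master and replica addresses instead of pinning to a fixed pod. `SlaveNotFoundError: No slave found for 'redis'` is raised by the Sentinel connection pool when it cannot obtain a usable replica (nor fall back to the master) for the monitored master set, and the worker's "No master found" loop means Sentinel could not name a master at that moment. The sketch below is only a minimal diagnostic, not part of the chart; it assumes a Sentinel endpoint named `netbox-redis` on port 26379 (a placeholder for whatever the Bitnami release's service is actually called), the master set `redis` from `sentinel.masterSet` above, and the auth password exposed via a `REDIS_PASSWORD` environment variable.

```python
# Diagnostic sketch (assumptions: Sentinel reachable at "netbox-redis:26379",
# master set named "redis" as in sentinel.masterSet, password in REDIS_PASSWORD).
import os

from redis.sentinel import MasterNotFoundError, Sentinel

PASSWORD = os.environ.get("REDIS_PASSWORD")
SERVICE = "redis"  # must match sentinel.masterSet

sentinel = Sentinel(
    [("netbox-redis", 26379)],               # placeholder Sentinel endpoint
    sentinel_kwargs={"password": PASSWORD},  # auth for the Sentinel port
    socket_timeout=1.0,
)

try:
    print("master:", sentinel.discover_master(SERVICE))
except MasterNotFoundError:
    # This is the state the worker loops on ("No master found"): Sentinel
    # cannot name a master for the set at the moment of the call.
    print("Sentinel reports no master for", SERVICE)

# discover_slaves() filters out replicas Sentinel has flagged as down; an
# empty list here is the precursor to SlaveNotFoundError on read traffic.
print("healthy replicas:", sentinel.discover_slaves(SERVICE))

# Sentinel-aware clients re-resolve the master on every reconnect, so they
# follow a failover instead of sticking to the old pod IP.
master = sentinel.master_for(SERVICE, password=PASSWORD, socket_timeout=1.0)
replica = sentinel.slave_for(SERVICE, password=PASSWORD, socket_timeout=1.0)
master.set("sentinel-check", "ok")
print("read back via replica:", replica.get("sentinel-check"))
```

If `discover_master` errors out or the replica list stays empty during a failover, the problem is on the Redis/Sentinel side (for example, replicas still flagged as down for the `downAfterMilliseconds: 60000` window configured above) rather than in the NetBox pods themselves.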

In addition, the netbox-worker pod also loses its connection to redis-master when the master is moved to another pod. It cannot find the new master after the failover and continuously reports "No master found". Here are the logs:

🧬 loaded config '/etc/netbox/config/configuration.py'
🧬 loaded config '/etc/netbox/config/extra.py'
🧬 loaded config '/etc/netbox/config/logging.py'
🧬 loaded config '/etc/netbox/config/plugins.py'
No queues have been specified. This process will service the following queues by default: high, default, low
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
Could not connect to Redis instance: Connection closed by server. Retrying in 1 seconds...
    self.run()
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 1788, in run
    pubsub.get_message(ignore_subscribe_messages=True, timeout=sleep_time)
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 1679, in get_message
    response = self.parse_response(block=(timeout is None), timeout=timeout)
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 1531, in parse_response
    response = self._execute(conn, try_read)
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 1507, in _execute
    return conn.retry.call_with_retry(
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/retry.py", line 49, in call_with_retry
    fail(error)
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 1509, in <lambda>
    lambda error: self._disconnect_raise_connect(conn, error),
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 1496, in _disconnect_raise_connect
    raise error
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/retry.py", line 46, in call_with_retry
    return do()
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 1508, in <lambda>
    lambda: command(*args, **kwargs),
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 1525, in try_read
    if not conn.can_read(timeout=timeout):
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/connection.py", line 929, in can_read
    return self._parser.can_read(timeout)
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/connection.py", line 340, in can_read
    return self._buffer and self._buffer.can_read(timeout)
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/connection.py", line 238, in can_read
    return bool(self.unread_bytes()) or self._read_from_socket(
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/connection.py", line 211, in _read_from_socket
    raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
redis.exceptions.ConnectionError: Connection closed by server.
Scheduler [PID 8] raised an exception.
Traceback (most recent call last):
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/sentinel.py", line 58, in read_response
    return super().read_response(disable_decoding=disable_decoding)
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/connection.py", line 957, in read_response
    raise response
redis.exceptions.ReadOnlyError: You can't write against a read only replica.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/netbox/venv/lib/python3.10/site-packages/rq/scheduler.py", line 236, in run
    scheduler.work()
  File "/opt/netbox/venv/lib/python3.10/site-packages/rq/scheduler.py", line 229, in work
    self.heartbeat()
  File "/opt/netbox/venv/lib/python3.10/site-packages/rq/scheduler.py", line 192, in heartbeat
    pipeline.execute()
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 2109, in execute
    return conn.retry.call_with_retry(
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/retry.py", line 49, in call_with_retry
    fail(error)
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 2111, in <lambda>
    lambda error: self._disconnect_raise_reset(conn, error),
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/retry.py", line 46, in call_with_retry
    return do()
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 2110, in <lambda>
    lambda: execute(conn, stack, raise_on_error),
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 1974, in _execute_transaction
    self.parse_response(connection, "_")
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 2049, in parse_response
    result = Redis.parse_response(self, connection, command_name, **options)
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 1275, in parse_response
    response = connection.read_response()
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/sentinel.py", line 67, in read_response
    raise ConnectionError("The previous master is now a slave")
redis.exceptions.ConnectionError: The previous master is now a slave

Process Scheduler:
Traceback (most recent call last):
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/sentinel.py", line 58, in read_response
    return super().read_response(disable_decoding=disable_decoding)
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/connection.py", line 957, in read_response
    raise response
redis.exceptions.ReadOnlyError: You can't write against a read only replica.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/netbox/venv/lib/python3.10/site-packages/rq/scheduler.py", line 236, in run
    scheduler.work()
  File "/opt/netbox/venv/lib/python3.10/site-packages/rq/scheduler.py", line 229, in work
    self.heartbeat()
  File "/opt/netbox/venv/lib/python3.10/site-packages/rq/scheduler.py", line 192, in heartbeat
    pipeline.execute()
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 2109, in execute
    return conn.retry.call_with_retry(
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/retry.py", line 49, in call_with_retry
    fail(error)
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 2111, in <lambda>
    lambda error: self._disconnect_raise_reset(conn, error),
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/retry.py", line 46, in call_with_retry
    return do()
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 2110, in <lambda>
    lambda: execute(conn, stack, raise_on_error),
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 1974, in _execute_transaction
    self.parse_response(connection, "_")
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 2049, in parse_response
    result = Redis.parse_response(self, connection, command_name, **options)
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/client.py", line 1275, in parse_response
    response = connection.read_response()
  File "/opt/netbox/venv/lib/python3.10/site-packages/redis/sentinel.py", line 67, in read_response
    raise ConnectionError("The previous master is now a slave")
redis.exceptions.ConnectionError: The previous master is now a slave
Could not connect to Redis instance: The previous master is now a slave Retrying in 2 seconds...
Could not connect to Redis instance: The previous master is now a slave Retrying in 4 seconds...
Could not connect to Redis instance: The previous master is now a slave Retrying in 8 seconds...
Could not connect to Redis instance: The previous master is now a slave Retrying in 16 seconds...
Could not connect to Redis instance: The previous master is now a slave Retrying in 32 seconds...
Could not connect to Redis instance: The previous master is now a slave Retrying in 60 seconds...
Could not connect to Redis instance: No master found for 'redis' Retrying in 60 seconds...
Could not connect to Redis instance: No master found for 'redis' Retrying in 60 seconds...
Could not connect to Redis instance: No master found for 'redis' Retrying in 60 seconds...
Could not connect to Redis instance: No master found for 'redis' Retrying in 60 seconds...
Could not connect to Redis instance: No master found for 'redis' Retrying in 60 seconds...
[the previous line repeats about 50 more times, once per 60-second retry]
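
For context on the retry loop above: before every reconnect the worker asks Sentinel which node is currently the master, so "No master found for 'redis'" means none of the Sentinels it can reach report a master for that service name. A minimal sketch of that lookup with redis-py, assuming the service name 'redis' and the Sentinel endpoint redis.netbox-dev.svc.cluster.local:26379 seen in the logs:

# Sketch of the Sentinel lookup behind "No master found for 'redis'".
# The hostname, port and service name are taken from the logs above and
# may differ in your deployment.
from redis.sentinel import Sentinel, MasterNotFoundError

sentinel = Sentinel(
    [("redis.netbox-dev.svc.cluster.local", 26379)],  # Sentinel endpoint(s)
    socket_timeout=1.0,
)

try:
    host, port = sentinel.discover_master("redis")
    print(f"current master: {host}:{port}")
except MasterNotFoundError:
    # Raised when no reachable Sentinel reports a master for the service
    # name; this is the condition the worker keeps retrying on.
    print("No master found for 'redis'")

# A client obtained via master_for() re-resolves the master on reconnect,
# which is how the worker is expected to follow a failover.
master = sentinel.master_for("redis", socket_timeout=1.0)
master.ping()

If discover_master already fails from inside the cluster, the problem is on the Sentinel side (no master ever gets re-elected or advertised) rather than in the worker's retry logic.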

The following are the logs from the Redis nodes. redis-node-0:

 04:04:45.35 INFO  ==> about to run the command: timeout 220 redis-cli -h redis.netbox-dev.svc.cluster.local -p 26379 sentinel get-master-addr-by-name redis
Could not connect to Redis at redis.netbox-dev.svc.cluster.local:26379: Connection refused
 04:04:45.36 INFO  ==> Configuring the node as master
1:C 22 Feb 2023 04:04:45.373 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 22 Feb 2023 04:04:45.374 # Redis version=7.0.8, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 22 Feb 2023 04:04:45.374 # Configuration loaded
1:M 22 Feb 2023 04:04:45.374 * monotonic clock: POSIX clock_gettime
1:M 22 Feb 2023 04:04:45.375 * Running mode=standalone, port=6379.
1:M 22 Feb 2023 04:04:45.375 # Server initialized
1:M 22 Feb 2023 04:04:45.377 * Reading RDB base file on AOF loading...
1:M 22 Feb 2023 04:04:45.377 * Loading RDB produced by version 7.0.8
1:M 22 Feb 2023 04:04:45.377 * RDB age 4362 seconds
1:M 22 Feb 2023 04:04:45.377 * RDB memory usage when created 1.45 Mb
1:M 22 Feb 2023 04:04:45.377 * RDB is base AOF
1:M 22 Feb 2023 04:04:45.377 * Done loading RDB, keys loaded: 0, keys expired: 0.
1:M 22 Feb 2023 04:04:45.377 * DB loaded from base file appendonly.aof.7.base.rdb: 0.001 seconds
1:M 22 Feb 2023 04:04:45.377 * DB loaded from append only file: 0.001 seconds
1:M 22 Feb 2023 04:04:45.377 * Opening AOF incr file appendonly.aof.7.incr.aof on server start
1:M 22 Feb 2023 04:04:45.377 * Ready to accept connections
1:S 22 Feb 2023 04:05:57.031 * Before turning into a replica, using my own master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
1:S 22 Feb 2023 04:05:57.031 * Connecting to MASTER redis-node-1.redis-headless.netbox-dev.svc.cluster.local:6379
1:S 22 Feb 2023 04:05:57.032 * MASTER <-> REPLICA sync started
1:S 22 Feb 2023 04:05:57.032 * REPLICAOF redis-node-1.redis-headless.netbox-dev.svc.cluster.local:6379 enabled (user request from 'id=29 addr=127.0.0.6:51721 laddr=10.20.190.125:6379 fd=17 name=sentinel-33535e4e-cmd age=11 idle=0 flags=x db=0 sub=0 psub=0 ssub=0 multi=4 qbuf=244 qbuf-free=20230 argv-mem=4 multi-mem=224 rbs=1024 rbp=1024 obl=45 oll=0 omem=0 tot-mem=22620 events=r cmd=exec user=default redir=-1 resp=2')
1:S 22 Feb 2023 04:05:57.032 * Non blocking connect for SYNC fired the event.
1:S 22 Feb 2023 04:05:57.055 * Master replied to PING, replication can continue...
1:S 22 Feb 2023 04:05:57.057 * Trying a partial resynchronization (request 1bd35bd743633594b3321cceb4566cb7dc910205:1).
1:S 22 Feb 2023 04:06:02.112 * Full resync from master: 57f7623967f7536e0bd97a65c30e8a58626fd88a:487
1:S 22 Feb 2023 04:06:02.113 * MASTER <-> REPLICA sync: receiving streamed RDB from master with EOF to disk
1:S 22 Feb 2023 04:06:02.113 * Discarding previously cached master state.
1:S 22 Feb 2023 04:06:02.113 * MASTER <-> REPLICA sync: Flushing old data
1:S 22 Feb 2023 04:06:02.113 * MASTER <-> REPLICA sync: Loading DB in memory
1:S 22 Feb 2023 04:06:02.116 * Loading RDB produced by version 7.0.8
1:S 22 Feb 2023 04:06:02.116 * RDB age 0 seconds
1:S 22 Feb 2023 04:06:02.116 * RDB memory usage when created 1.05 Mb
1:S 22 Feb 2023 04:06:02.116 * Done loading RDB, keys loaded: 0, keys expired: 0.
1:S 22 Feb 2023 04:06:02.116 * MASTER <-> REPLICA sync: Finished with success
1:S 22 Feb 2023 04:06:02.117 * Creating AOF incr file temp-appendonly.aof.incr on background rewrite
1:S 22 Feb 2023 04:06:02.117 * Background append only file rewriting started by pid 285
285:C 22 Feb 2023 04:06:02.119 * Successfully created the temporary AOF base file temp-rewriteaof-bg-285.aof
285:C 22 Feb 2023 04:06:02.120 * Fork CoW for AOF rewrite: current 0 MB, peak 0 MB, average 0 MB
1:S 22 Feb 2023 04:06:02.153 * Background AOF rewrite terminated with success
1:S 22 Feb 2023 04:06:02.153 * Successfully renamed the temporary AOF base file temp-rewriteaof-bg-285.aof into appendonly.aof.8.base.rdb
1:S 22 Feb 2023 04:06:02.153 * Successfully renamed the temporary AOF incr file temp-appendonly.aof.incr into appendonly.aof.8.incr.aof
1:S 22 Feb 2023 04:06:02.156 * Removing the history file appendonly.aof.7.incr.aof in the background
1:S 22 Feb 2023 04:06:02.156 * Removing the history file appendonly.aof.7.base.rdb in the background
1:S 22 Feb 2023 04:06:02.159 * Background AOF rewrite finished successfully

redis-node-1:

 04:05:20.02 INFO  ==> about to run the command: timeout 220 redis-cli -h redis.netbox-dev.svc.cluster.local -p 26379 sentinel get-master-addr-by-name redis
Could not connect to Redis at redis.netbox-dev.svc.cluster.local:26379: Connection refused
 04:05:20.03 INFO  ==> Configuring the node as master
1:C 22 Feb 2023 04:05:20.046 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 22 Feb 2023 04:05:20.046 # Redis version=7.0.8, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 22 Feb 2023 04:05:20.046 # Configuration loaded
1:M 22 Feb 2023 04:05:20.046 * monotonic clock: POSIX clock_gettime
1:M 22 Feb 2023 04:05:20.047 * Running mode=standalone, port=6379.
1:M 22 Feb 2023 04:05:20.047 # Server initialized
1:M 22 Feb 2023 04:05:20.048 * Reading RDB base file on AOF loading...
1:M 22 Feb 2023 04:05:20.048 * Loading RDB produced by version 7.0.8
1:M 22 Feb 2023 04:05:20.048 * RDB age 6547 seconds
1:M 22 Feb 2023 04:05:20.048 * RDB memory usage when created 1.56 Mb
1:M 22 Feb 2023 04:05:20.048 * RDB is base AOF
1:M 22 Feb 2023 04:05:20.048 * Done loading RDB, keys loaded: 2, keys expired: 0.
1:M 22 Feb 2023 04:05:20.048 * DB loaded from base file appendonly.aof.2.base.rdb: 0.001 seconds
1:M 22 Feb 2023 04:05:20.049 * DB loaded from incr file appendonly.aof.2.incr.aof: 0.001 seconds
1:M 22 Feb 2023 04:05:20.049 * DB loaded from append only file: 0.001 seconds
1:M 22 Feb 2023 04:05:20.049 * Opening AOF incr file appendonly.aof.2.incr.aof on server start
1:M 22 Feb 2023 04:05:20.049 * Ready to accept connections
1:M 22 Feb 2023 04:05:57.058 * Replica redis-node-0.redis-headless.netbox-dev.svc.cluster.local:6379 asks for synchronization
1:M 22 Feb 2023 04:05:57.058 * Partial resynchronization not accepted: Replication ID mismatch (Replica asked for '1bd35bd743633594b3321cceb4566cb7dc910205', my replication IDs are 'b259c9932c90e5a63495e859bd539e010e893e35' and '0000000000000000000000000000000000000000')
1:M 22 Feb 2023 04:05:57.058 * Replication backlog created, my new replication IDs are '57f7623967f7536e0bd97a65c30e8a58626fd88a' and '0000000000000000000000000000000000000000'
1:M 22 Feb 2023 04:05:57.058 * Delay next BGSAVE for diskless SYNC
1:M 22 Feb 2023 04:06:02.111 * Starting BGSAVE for SYNC with target: replicas sockets
1:M 22 Feb 2023 04:06:02.111 * Background RDB transfer started by pid 130
130:C 22 Feb 2023 04:06:02.112 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
1:M 22 Feb 2023 04:06:02.112 # Diskless rdb transfer, done reading from pipe, 1 replicas still up.
1:M 22 Feb 2023 04:06:02.118 * Background RDB transfer terminated with success
1:M 22 Feb 2023 04:06:02.118 * Streamed RDB transfer with replica redis-node-0.redis-headless.netbox-dev.svc.cluster.local:6379 succeeded (socket). Waiting for REPLCONF ACK from slave to enable streaming
1:M 22 Feb 2023 04:06:02.118 * Synchronization with replica redis-node-0.redis-headless.netbox-dev.svc.cluster.local:6379 succeeded
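
For reference, the sentinel get-master-addr-by-name check that the startup scripts above run (and that fails with "Connection refused" on both nodes) can also be issued per node from Python to see whether each Sentinel agrees on the current master. A sketch, assuming the headless-service pod names and the default Sentinel port 26379 from the logs:

# Sketch: ask each node's Sentinel who it currently considers the master.
# The pod DNS names below come from the logs above and are assumptions.
import redis

sentinel_hosts = [
    "redis-node-0.redis-headless.netbox-dev.svc.cluster.local",
    "redis-node-1.redis-headless.netbox-dev.svc.cluster.local",
]

for host in sentinel_hosts:
    client = redis.Redis(host=host, port=26379, socket_timeout=1.0)
    try:
        # Equivalent to: redis-cli -p 26379 sentinel get-master-addr-by-name redis
        addr = client.sentinel_get_master_addr_by_name("redis")
        print(f"{host} reports master: {addr}")
    except redis.exceptions.RedisError as exc:
        print(f"{host}: sentinel query failed: {exc}")

Note that in the logs above both nodes got "Connection refused" from the sentinel service at startup and each configured itself as master; that would be consistent with the later "The previous master is now a slave" errors once the Sentinels reconciled and demoted one of them.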
ghost commented 1 year ago

@bootc, do you know anyone who can help us figure out which probes these are in the logs that get 500 responses when no slaves are found? It starts to get annoying when they go down while the system is in use.

redis.sentinel.SlaveNotFoundError: No slave found for 'redis'
127.0.0.6 - - [16/Feb/2023:07:51:29 +0000] "GET /login/ HTTP/1.1" 500 1605 "-" "kube-probe/1.22+"
127.0.0.6 - - [16/Feb/2023:07:51:39 +0000] "GET /login/ HTTP/1.1" 200 5120 "-" "kube-probe/1.22+"
127.0.0.6 - - [16/Feb/2023:07:51:49 +0000] "GET /login/ HTTP/1.1" 200 5120 "-" "kube-probe/1.22+"
127.0.0.6 - - [16/Feb/2023:07:51:59 +0000] "GET /login/ HTTP/1.1" 200 5120 "-" "kube-probe/1.22+"
127.0.0.6 - - [16/Feb/2023:07:52:09 +0000] "GET /login/ HTTP/1.1" 200 5120 "-" "kube-probe/1.22+"
127.0.0.6 - - [16/Feb/2023:07:52:19 +0000] "GET /login/ HTTP/1.1" 200 5120 "-" "kube-probe/1.22+"
127.0.0.6 - - [16/Feb/2023:07:52:29 +0000] "GET /login/ HTTP/1.1" 200 5120 "-" "kube-probe/1.22+"
127.0.0.6 - - [16/Feb/2023:07:52:39 +0000] "GET /login/ HTTP/1.1" 200 5120 "-" "kube-probe/1.22+"
127.0.0.6 - - [16/Feb/2023:07:52:49 +0000] "GET /login/ HTTP/1.1" 200 5120 "-" "kube-probe/1.22+"
127.0.0.6 - - [16/Feb/2023:07:52:59 +0000] "GET /login/ HTTP/1.1" 200 5120 "-" "kube-probe/1.22+"
redis.sentinel.SlaveNotFoundError: No slave found for 'redis'
127.0.0.6 - - [16/Feb/2023:07:53:09 +0000] "GET /login/ HTTP/1.1" 500 1605 "-" "kube-probe/1.22+"
redis.sentinel.SlaveNotFoundError: No slave found for 'redis'
127.0.0.6 - - [16/Feb/2023:07:53:19 +0000] "GET /login/ HTTP/1.1" 500 1605 "-" "kube-probe/1.22+"
redis.sentinel.SlaveNotFoundError: No slave found for 'redis'
127.0.0.6 - - [16/Feb/2023:07:53:29 +0000] "GET /login/ HTTP/1.1" 500 1605 "-" "kube-probe/1.22+"
redis.sentinel.SlaveNotFoundError: No slave found for 'redis'
127.0.0.6 - - [16/Feb/2023:07:53:29 +0000] "GET /login/ HTTP/1.1" 500 1605 "-" "kube-probe/1.22+"
127.0.0.6 - - [16/Feb/2023:07:53:39 +0000] "GET /login/ HTTP/1.1" 500 1605 "-" "kube-probe/1.22+"
127.0.0.6 - - [16/Feb/2023:07:53:49 +0000] "GET /login/ HTTP/1.1" 200 5120 "-" "kube-probe/1.22+"
127.0.0.6 - - [16/Feb/2023:07:53:59 +0000] "GET /login/ HTTP/1.1" 200 5120 "-" "kube-probe/1.22+"
127.0.0.6 - - [16/Feb/2023:07:54:09 +0000] "GET /login/ HTTP/1.1" 200 5120 "-" "kube-probe/1.22+"
127.0.0.6 - - [16/Feb/2023:07:54:19 +0000] "GET /login/ HTTP/1.1" 200 5120 "-" "kube-probe/1.22+"
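
On the probe question: the "kube-probe/1.22+" user agent means those requests are the kubelet's HTTP health checks against /login/, and they return 500 whenever the page's cache read fails. With a Sentinel-backed cache a read may be routed to a replica, and when Sentinel has no usable replica to hand out (and the master fallback also fails) redis-py raises SlaveNotFoundError, which is what surfaces as the 500. A minimal sketch, assuming the Sentinel endpoint and service name from earlier in the thread:

# Sketch of the failing read path: a replica-bound client raises
# SlaveNotFoundError when Sentinel has no usable replica to hand out.
# Host, port, service name and cache key are assumptions for illustration.
from redis.sentinel import Sentinel, SlaveNotFoundError

sentinel = Sentinel(
    [("redis.netbox-dev.svc.cluster.local", 26379)],
    socket_timeout=1.0,
)

replica = sentinel.slave_for("redis", socket_timeout=1.0)

try:
    replica.get("some-cache-key")
except SlaveNotFoundError:
    # Surfaces as "No slave found for 'redis'" and, via the cache layer,
    # as the HTTP 500 the probes record on /login/.
    print("No slave found for 'redis'")

So the 500s line up with the failover window in which the Sentinels can offer neither a healthy replica nor a reachable master as a fallback; once a replica re-registers, the probes go back to 200 as seen above.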