Closed by baskinsy 1 year ago
Hi, could you indicate the parameters you used? I would like to try to reproduce it on my side.
This is my complete values.yaml
## @section Global parameters
## Global Docker image parameters
## Please note that this will override the image parameters, including those of dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
## @param global.storageClass Global StorageClass for Persistent Volume(s)
##
global:
imageRegistry: ""
## E.g.
## imagePullSecrets:
## - myRegistryKeySecretName
##
imagePullSecrets: []
storageClass: ""
## @section Common parameters
##
## @param kubeVersion Override Kubernetes version
##
kubeVersion: ""
## @param nameOverride String to partially override common.names.fullname (will maintain the release name)
##
nameOverride: ""
## @param fullnameOverride String to fully override common.names.fullname
##
fullnameOverride: ""
## @param clusterDomain Kubernetes Cluster Domain
##
clusterDomain: cluster.local
## @param commonLabels Labels to add to all deployed objects
##
commonLabels: {}
## @param commonAnnotations Annotations to add to all deployed objects
##
commonAnnotations: {}
## @param extraDeploy Array of extra objects to deploy with the release
##
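## e.g. (illustrative only):
## extraDeploy:
##   - apiVersion: v1
##     kind: ConfigMap
##     metadata:
##       name: extra-config
##     data:
##       key: value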
extraDeploy: []
## Enable diagnostic mode in the deployment(s)/daemonset(s)
##
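## e.g. enable it at deploy time (illustrative):
##   helm upgrade --install my-release <chart> --set diagnosticMode.enabled=true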
diagnosticMode:
## @param diagnosticMode.enabled Enable diagnostic mode (all probes will be disabled and the command will be overridden)
##
enabled: false
  ## @param diagnosticMode.command Command to override all containers in the deployment(s)/daemonset(s)
##
command:
- sleep
  ## @param diagnosticMode.args Args to override all containers in the deployment(s)/daemonset(s)
##
args:
- infinity
## @section Hub deployment parameters
hub:
## @param hub.image.registry Hub image registry
## @param hub.image.repository Hub image repository
## @param hub.image.tag Hub image tag (immutable tags are recommended)
  ## @param hub.image.digest Hub image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag
## @param hub.image.pullPolicy Hub image pull policy
## @param hub.image.pullSecrets Hub image pull secrets
##
image:
registry: docker.io
repository: bitnami/jupyterhub
tag: 4.0.0-debian-11-r8
digest: ""
    ## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## e.g:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## @param hub.baseUrl Hub base URL
##
baseUrl: /
## @param hub.adminUser Hub Dummy authenticator admin user
##
adminUser: administrator
## @param hub.password Hub Dummy authenticator password
##
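  ## NOTE: if left empty, a random password is typically auto-generated by the chart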
password: ""
## Configuration file passed to the hub. This will be used by the jupyterhub_config.py file
  ## This configuration uses the values from the `singleuser` section. In the upstream chart the
  ## values.yaml file is mounted in the hub container. In this chart, we tried to separate both
  ## configurations so we could follow the Bitnami values standards
## @param hub.configuration [string] Hub configuration file (to be used by jupyterhub_config.py)
##
configuration: |
Chart:
Name: {{ .Chart.Name }}
Version: {{ .Chart.Version }}
Release:
Name: {{ .Release.Name }}
Namespace: {{ .Release.Namespace }}
Service: {{ .Release.Service }}
hub:
config:
JupyterHub:
admin_access: true
authenticator_class: nativeauthenticator.NativeAuthenticator
Authenticator:
admin_users:
- administrator
cookieSecret:
concurrentSpawnLimit: 64
consecutiveFailureLimit: 5
activeServerLimit:
db:
type: postgres
url: postgresql://{{ ternary .Values.postgresql.auth.username .Values.externalDatabase.user .Values.postgresql.enabled }}@{{ ternary (include "jupyterhub.postgresql.fullname" .) .Values.externalDatabase.host .Values.postgresql.enabled }}:{{ ternary "5432" .Values.externalDatabase.port .Values.postgresql.enabled }}/{{ ternary .Values.postgresql.auth.database .Values.externalDatabase.database .Values.postgresql.enabled }}
services: {}
allowNamedServers: false
namedServerLimitPerUser:
{{- if .Values.hub.metrics.serviceMonitor.enabled }}
authenticatePrometheus: {{ .Values.hub.metrics.authenticatePrometheus }}
{{- end }}
redirectToServer:
shutdownOnLogout:
singleuser:
podNameTemplate: {{ include "common.names.fullname" . }}-jupyter-{username}
{{- if .Values.singleuser.tolerations }}
extraTolerations: {{- include "common.tplvalues.render" ( dict "value" .Values.singleuser.tolerations "context" $) | nindent 4 }}
{{- end }}
{{- if .Values.singleuser.nodeSelector }}
nodeSelector: {{- include "common.tplvalues.render" ( dict "value" .Values.singleuser.nodeSelector "context" $) | nindent 4 }}
{{- end }}
networkTools:
image:
name: {{ include "jupyterhub.hubconfiguration.imageEntry" ( dict "imageRoot" .Values.auxiliaryImage "global" $) }}
tag: {{ .Values.auxiliaryImage.tag }}
digest: {{ .Values.auxiliaryImage.digest }}
pullPolicy: {{ .Values.auxiliaryImage.pullPolicy }}
pullSecrets: {{- include "jupyterhub.imagePullSecrets.list" . | nindent 8 }}
cloudMetadata:
blockWithIptables: false
events: true
extraAnnotations:
{{- if .Values.commonAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.singleuser.podAnnotations }}
{{- include "common.tplvalues.render" ( dict "value" .Values.singleuser.podAnnotations "context" $ ) | nindent 4 }}
{{- end }}
extraLabels:
hub.jupyter.org/network-access-hub: "true"
app.kubernetes.io/component: singleuser
{{- include "common.labels.standard" . | nindent 4 }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.singleuser.podLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.singleuser.podLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.singleuser.extraEnvVars }}
extraEnv: {{- include "common.tplvalues.render" ( dict "value" .Values.singleuser.extraEnvVars "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.singleuser.lifecycleHooks }}
lifecycleHooks: {{- include "common.tplvalues.render" ( dict "value" .Values.singleuser.lifecycleHooks "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.singleuser.initContainers }}
initContainers: {{- include "common.tplvalues.render" ( dict "value" .Values.singleuser.initContainers "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.singleuser.sidecars }}
extraContainers: {{- include "common.tplvalues.render" ( dict "value" .Values.singleuser.sidecars "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.singleuser.containerSecurityContext.enabled }}
uid: {{ .Values.singleuser.containerSecurityContext.runAsUser }}
{{- end }}
{{- if .Values.singleuser.podSecurityContext.enabled }}
fsGid: {{ .Values.singleuser.podSecurityContext.fsGroup }}
{{- end }}
serviceAccountName: {{ template "jupyterhub.singleuserServiceAccountName" . }}
storage:
{{- if .Values.singleuser.persistence.enabled }}
type: dynamic
{{- else }}
type: none
{{- end }}
extraLabels:
app.kubernetes.io/component: singleuser
{{- include "common.labels.standard" . | nindent 6 }}
{{- if .Values.singleuser.extraVolumes }}
extraVolumes: {{- include "common.tplvalues.render" ( dict "value" .Values.singleuser.extraVolumes "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.singleuser.extraVolumeMounts }}
extraVolumeMounts: {{- include "common.tplvalues.render" ( dict "value" .Values.singleuser.extraVolumeMounts "context" $ ) | nindent 4 }}
{{- end }}
capacity: {{ .Values.singleuser.persistence.size }}
homeMountPath: {{ .Values.singleuser.notebookDir }}
dynamic:
{{ include "jupyterhub.storage.class" (dict "persistence" .Values.singleuser.persistence "global" .Values.global) }}
pvcNameTemplate: {{ include "common.names.fullname" . }}-claim-{username}{servername}
volumeNameTemplate: {{ include "common.names.fullname" . }}-volume-{username}{servername}
storageAccessModes: {{- include "common.tplvalues.render" ( dict "value" .Values.singleuser.persistence.accessModes "context" $ ) | nindent 8 }}
image:
name: {{ include "jupyterhub.hubconfiguration.imageEntry" ( dict "imageRoot" .Values.singleuser.image "global" $) }}
tag: {{ .Values.singleuser.image.tag }}
digest: {{ .Values.singleuser.image.digest }}
pullPolicy: {{ .Values.singleuser.image.pullPolicy }}
pullSecrets: {{- include "jupyterhub.imagePullSecrets.list" . | nindent 8 }}
startTimeout: 300
{{- /* We need to replace the Kubernetes memory/cpu terminology (e.g. 10Gi, 10Mi) with one compatible with Python (10G, 10M) */}}
cpu:
limit: {{ regexReplaceAll "([A-Za-z])i" (default "" .Values.singleuser.resources.limits.cpu) "${1}" }}
guarantee: {{ regexReplaceAll "([A-Za-z])i" (default "" .Values.singleuser.resources.requests.cpu) "${1}" }}
memory:
limit: {{ regexReplaceAll "([A-Za-z])i" (default "" .Values.singleuser.resources.limits.memory) "${1}" }}
guarantee: {{ regexReplaceAll "([A-Za-z])i" (default "" .Values.singleuser.resources.requests.memory) "${1}" }}
{{- if .Values.singleuser.command }}
cmd: {{- include "common.tplvalues.render" (dict "value" .Values.singleuser.command "context" $) | nindent 12 }}
{{- else }}
cmd: jupyterhub-singleuser
{{- end }}
defaultUrl:
cull:
enabled: true
users: false
removeNamedServers: false
timeout: 3600
every: 600
concurrency: 10
maxAge: 0
## @param hub.existingConfigmap Configmap with Hub init scripts (replaces the scripts in templates/hub/configmap.yml)
##
existingConfigmap: ""
## @param hub.existingSecret Secret with hub configuration (replaces the hub.configuration value) and proxy token
##
existingSecret: ""
## @param hub.command Override Hub default command
##
command: []
## @param hub.args Override Hub default args
##
args: []
## @param hub.extraEnvVars Add extra environment variables to the Hub container
## Example:
## extraEnvVars:
## - name: FOO
## value: "bar"
##
extraEnvVars: []
## @param hub.extraEnvVarsCM Name of existing ConfigMap containing extra env vars
##
extraEnvVarsCM: ""
## @param hub.extraEnvVarsSecret Name of existing Secret containing extra env vars
##
extraEnvVarsSecret: ""
## @param hub.containerPorts.http Hub container port
##
containerPorts:
http: 8081
## Configure extra options for Hub containers' liveness, readiness and startup probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
## @param hub.startupProbe.enabled Enable startupProbe on Hub containers
## @param hub.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe
## @param hub.startupProbe.periodSeconds Period seconds for startupProbe
## @param hub.startupProbe.timeoutSeconds Timeout seconds for startupProbe
## @param hub.startupProbe.failureThreshold Failure threshold for startupProbe
## @param hub.startupProbe.successThreshold Success threshold for startupProbe
##
startupProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 10
failureThreshold: 30
timeoutSeconds: 3
successThreshold: 1
## @param hub.livenessProbe.enabled Enable livenessProbe on Hub containers
## @param hub.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
## @param hub.livenessProbe.periodSeconds Period seconds for livenessProbe
## @param hub.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
## @param hub.livenessProbe.failureThreshold Failure threshold for livenessProbe
## @param hub.livenessProbe.successThreshold Success threshold for livenessProbe
##
livenessProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 10
failureThreshold: 30
timeoutSeconds: 3
successThreshold: 1
## @param hub.readinessProbe.enabled Enable readinessProbe on Hub containers
## @param hub.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
## @param hub.readinessProbe.periodSeconds Period seconds for readinessProbe
## @param hub.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
## @param hub.readinessProbe.failureThreshold Failure threshold for readinessProbe
## @param hub.readinessProbe.successThreshold Success threshold for readinessProbe
##
readinessProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 10
failureThreshold: 30
timeoutSeconds: 3
successThreshold: 1
## @param hub.customStartupProbe Override default startup probe
##
customStartupProbe: {}
## @param hub.customLivenessProbe Override default liveness probe
##
customLivenessProbe: {}
## @param hub.customReadinessProbe Override default readiness probe
##
customReadinessProbe: {}
## Hub resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## @param hub.resources.limits The resources limits for the Hub containers
## @param hub.resources.requests The requested resources for the Hub containers
##
resources:
limits: {}
requests: {}
## hub containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## @param hub.containerSecurityContext.enabled Enabled Hub containers' Security Context
## @param hub.containerSecurityContext.runAsUser Set Hub container's Security Context runAsUser
## @param hub.containerSecurityContext.runAsNonRoot Set Hub container's Security Context runAsNonRoot
##
containerSecurityContext:
enabled: true
runAsUser: 1000
runAsNonRoot: true
## hub pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param hub.podSecurityContext.enabled Enabled Hub pods' Security Context
## @param hub.podSecurityContext.fsGroup Set Hub pod's Security Context fsGroup
##
podSecurityContext:
enabled: true
fsGroup: 1001
## @param hub.lifecycleHooks LifecycleHooks for the Hub container to automate configuration before or after startup
##
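  ## e.g. (illustrative):
  ## lifecycleHooks:
  ##   postStart:
  ##     exec:
  ##       command: ["/bin/sh", "-c", "echo Hub started"]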
lifecycleHooks: {}
## @param hub.hostAliases Add deployment host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
##
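  ## e.g. (illustrative):
  ## hostAliases:
  ##   - ip: "127.0.0.1"
  ##     hostnames:
  ##       - foo.local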
hostAliases: []
## @param hub.podLabels Add extra labels to the Hub pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## @param hub.podAnnotations Add extra annotations to the Hub pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## Pod affinity preset
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
## @param hub.podAffinityPreset Pod affinity preset. Ignored if `hub.affinity` is set. Allowed values: `soft` or `hard`
##
podAffinityPreset: ""
## Pod anti-affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## @param hub.podAntiAffinityPreset Pod anti-affinity preset. Ignored if `hub.affinity` is set. Allowed values: `soft` or `hard`
##
podAntiAffinityPreset: soft
## Node affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
## @param hub.nodeAffinityPreset.type Node affinity preset type. Ignored if `hub.affinity` is set. Allowed values: `soft` or `hard`
## @param hub.nodeAffinityPreset.key Node label key to match. Ignored if `hub.affinity` is set
## @param hub.nodeAffinityPreset.values Node label values to match. Ignored if `hub.affinity` is set
##
nodeAffinityPreset:
type: ""
## E.g.
## key: "kubernetes.io/e2e-az-name"
##
key: ""
## E.g.
## values:
## - e2e-az1
## - e2e-az2
##
values: []
## @param hub.affinity Affinity for pod assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## @param hub.nodeSelector Node labels for pod assignment.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## @param hub.tolerations Tolerations for pod assignment.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## @param hub.topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods
##
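  ## e.g. (illustrative; the label values are assumptions):
  ## topologySpreadConstraints:
  ##   - maxSkew: 1
  ##     topologyKey: topology.kubernetes.io/zone
  ##     whenUnsatisfiable: ScheduleAnyway
  ##     labelSelector:
  ##       matchLabels:
  ##         app.kubernetes.io/component: hub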
topologySpreadConstraints: []
## @param hub.priorityClassName Priority Class Name
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
##
priorityClassName: ""
## @param hub.schedulerName Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
## @param hub.terminationGracePeriodSeconds Seconds Hub pod needs to terminate gracefully
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
##
terminationGracePeriodSeconds: ""
## @param hub.updateStrategy.type Update strategy - only really applicable for deployments with RWO PVs attached
## @param hub.updateStrategy.rollingUpdate Hub deployment rolling update configuration parameters
## If replicas = 1, an update can get "stuck", as the previous pod remains attached to the
## PV, and the "incoming" pod can never start. Changing the strategy to "Recreate" will
## terminate the single previous pod, so that the new, incoming pod can attach to the PV
##
updateStrategy:
type: Recreate
## rollingUpdate: {}
## @param hub.extraVolumes Optionally specify extra list of additional volumes for Hub pods
##
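  ## e.g. (illustrative; pair each volume with a matching entry in extraVolumeMounts):
  ## extraVolumes:
  ##   - name: extra-config
  ##     configMap:
  ##       name: my-configmap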
extraVolumes: []
## @param hub.extraVolumeMounts Optionally specify extra list of additional volumeMounts for Hub container(s)
##
extraVolumeMounts: []
## @param hub.initContainers Add additional init containers to the Hub pods
## Example:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
initContainers: []
## @param hub.sidecars Add additional sidecar containers to the Hub pod
## Example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: []
## @param hub.pdb.create Deploy Hub PodDisruptionBudget
## @param hub.pdb.minAvailable Set minimum available hub instances
  ## @param hub.pdb.maxUnavailable Set maximum unavailable hub instances
##
pdb:
create: false
minAvailable: ""
maxUnavailable: ""
## @section Hub RBAC parameters
## ServiceAccount parameters
##
serviceAccount:
## @param hub.serviceAccount.create Specifies whether a ServiceAccount should be created
##
create: true
## @param hub.serviceAccount.name Override Hub service account name
## If not set and create is true, a name is generated using the fullname template
##
name: ""
## @param hub.serviceAccount.automountServiceAccountToken Allows auto mount of ServiceAccountToken on the serviceAccount created
## Can be set to false if pods using this serviceAccount do not need to use K8s API
##
automountServiceAccountToken: true
## @param hub.serviceAccount.annotations Additional custom annotations for the ServiceAccount
##
annotations: {}
## RBAC resources
##
rbac:
## @param hub.rbac.create Specifies whether RBAC resources should be created
##
create: true
## @param hub.rbac.rules Custom RBAC rules to set
## e.g:
## rules:
## - apiGroups:
## - ""
## resources:
## - pods
## verbs:
## - get
## - list
##
rules: []
## @section Hub Traffic Exposure Parameters
## Network policy
##
networkPolicy:
## @param hub.networkPolicy.enabled Deploy Hub network policies
##
enabled: true
## @param hub.networkPolicy.allowInterspaceAccess Allow communication between pods in different namespaces
##
allowInterspaceAccess: true
## @param hub.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy
##
extraIngress: ""
    ## @param hub.networkPolicy.extraEgress [string] Add extra egress rules to the NetworkPolicy
##
extraEgress: |
## Hub --> Any IP:PORT
##
- to:
service:
## @param hub.service.type Hub service type
##
type: ClusterIP
## @param hub.service.ports.http Hub service HTTP port
##
ports:
http: 8081
## @param hub.service.nodePorts.http NodePort for the HTTP endpoint
## NOTE: choose port between <30000-32767>
##
nodePorts:
http: ""
## @param hub.service.sessionAffinity Control where client requests go, to the same pod or round-robin
## Values: ClientIP or None
## ref: https://kubernetes.io/docs/user-guide/services/
##
sessionAffinity: None
## @param hub.service.sessionAffinityConfig Additional settings for the sessionAffinity
## sessionAffinityConfig:
## clientIP:
## timeoutSeconds: 300
##
sessionAffinityConfig: {}
## @param hub.service.clusterIP Hub service Cluster IP
## e.g.:
## clusterIP: None
##
clusterIP: ""
## @param hub.service.loadBalancerIP Hub service Load Balancer IP
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer
##
loadBalancerIP: ""
## @param hub.service.loadBalancerSourceRanges Hub service Load Balancer sources
## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
## e.g:
## loadBalancerSourceRanges:
## - 10.10.10.0/24
##
loadBalancerSourceRanges: []
## @param hub.service.externalTrafficPolicy Hub service external traffic policy
## ref https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
##
externalTrafficPolicy: Cluster
## @param hub.service.annotations Additional custom annotations for Hub service
##
annotations: {}
## @param hub.service.extraPorts Extra port to expose on Hub service
##
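    ## e.g. (illustrative):
    ## extraPorts:
    ##   - name: extra-http
    ##     port: 8080
    ##     targetPort: 8080
    ##     protocol: TCP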
extraPorts: []
## @section Hub Metrics parameters
##
metrics:
## @param hub.metrics.authenticatePrometheus Use authentication for Prometheus
    ## To allow public access to the Prometheus metrics without authentication, leave this set to false.
##
authenticatePrometheus: false
## Prometheus Operator ServiceMonitor configuration
##
serviceMonitor:
## @param hub.metrics.serviceMonitor.enabled If the operator is installed in your cluster, set to true to create a Service Monitor Entry
##
enabled: false
## @param hub.metrics.serviceMonitor.namespace Namespace which Prometheus is running in
##
namespace: ""
## @param hub.metrics.serviceMonitor.path HTTP path to scrape for metrics
##
path: /hub/metrics
## @param hub.metrics.serviceMonitor.interval Interval at which metrics should be scraped
##
interval: 30s
## @param hub.metrics.serviceMonitor.scrapeTimeout Specify the timeout after which the scrape is ended
## e.g:
## scrapeTimeout: 30s
##
scrapeTimeout: ""
## @param hub.metrics.serviceMonitor.labels Additional labels that can be used so ServiceMonitor will be discovered by Prometheus
##
labels: {}
## @param hub.metrics.serviceMonitor.selector Prometheus instance selector labels
## ref: https://github.com/bitnami/charts/tree/main/bitnami/prometheus-operator#prometheus-configuration
##
selector: {}
## @param hub.metrics.serviceMonitor.relabelings RelabelConfigs to apply to samples before scraping
##
relabelings: []
## @param hub.metrics.serviceMonitor.metricRelabelings MetricRelabelConfigs to apply to samples before ingestion
##
metricRelabelings: []
## @param hub.metrics.serviceMonitor.honorLabels Specify honorLabels parameter to add the scrape endpoint
##
honorLabels: false
## @param hub.metrics.serviceMonitor.jobLabel The name of the label on the target service to use as the job name in prometheus.
##
jobLabel: ""
## @section Proxy deployment parameters
proxy:
## @param proxy.image.registry Proxy image registry
## @param proxy.image.repository Proxy image repository
## @param proxy.image.tag Proxy image tag (immutable tags are recommended)
  ## @param proxy.image.digest Proxy image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag
## @param proxy.image.pullPolicy Proxy image pull policy
## @param proxy.image.pullSecrets Proxy image pull secrets
## @param proxy.image.debug Activate verbose output
##
image:
registry: docker.io
repository: bitnami/configurable-http-proxy
tag: 4.5.5-debian-11-r16
digest: ""
    ## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## e.g:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## Enable debug mode
##
debug: false
## @param proxy.secretToken Proxy secret token (used for communication with the Hub)
##
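  ## If left empty, a random token is typically auto-generated. To provide your own,
  ## a 32-byte hex string is the usual format, e.g. generated with: openssl rand -hex 32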
secretToken: ""
## @param proxy.command Override Proxy default command
##
command: []
## @param proxy.args Override Proxy default args
##
args: []
## @param proxy.extraEnvVars Add extra environment variables to the Proxy container
## Example:
## extraEnvVars:
## - name: FOO
## value: "bar"
##
extraEnvVars: []
## @param proxy.extraEnvVarsCM Name of existing ConfigMap containing extra env vars
##
extraEnvVarsCM: ""
## @param proxy.extraEnvVarsSecret Name of existing Secret containing extra env vars
##
extraEnvVarsSecret: ""
## Container ports
## @param proxy.containerPort.api Proxy api container port
## @param proxy.containerPort.metrics Proxy metrics container port
## @param proxy.containerPort.http Proxy http container port
##
containerPort:
api: 8001
metrics: 8002
http: 8000
## Configure extra options for Proxy containers' liveness, readiness and startup probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
## @param proxy.startupProbe.enabled Enable startupProbe on Proxy containers
## @param proxy.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe
## @param proxy.startupProbe.periodSeconds Period seconds for startupProbe
## @param proxy.startupProbe.timeoutSeconds Timeout seconds for startupProbe
## @param proxy.startupProbe.failureThreshold Failure threshold for startupProbe
## @param proxy.startupProbe.successThreshold Success threshold for startupProbe
##
startupProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 10
failureThreshold: 30
timeoutSeconds: 3
successThreshold: 1
## @param proxy.livenessProbe.enabled Enable livenessProbe on Proxy containers
## @param proxy.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
## @param proxy.livenessProbe.periodSeconds Period seconds for livenessProbe
## @param proxy.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
## @param proxy.livenessProbe.failureThreshold Failure threshold for livenessProbe
## @param proxy.livenessProbe.successThreshold Success threshold for livenessProbe
##
livenessProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 10
failureThreshold: 30
timeoutSeconds: 3
successThreshold: 1
## @param proxy.readinessProbe.enabled Enable readinessProbe on Proxy containers
## @param proxy.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
## @param proxy.readinessProbe.periodSeconds Period seconds for readinessProbe
## @param proxy.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
## @param proxy.readinessProbe.failureThreshold Failure threshold for readinessProbe
## @param proxy.readinessProbe.successThreshold Success threshold for readinessProbe
##
readinessProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 10
failureThreshold: 30
timeoutSeconds: 3
successThreshold: 1
## @param proxy.customStartupProbe Override default startup probe
##
customStartupProbe: {}
## @param proxy.customLivenessProbe Override default liveness probe
##
customLivenessProbe: {}
## @param proxy.customReadinessProbe Override default readiness probe
##
customReadinessProbe: {}
## Proxy resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## @param proxy.resources.limits The resources limits for the Proxy containers
## @param proxy.resources.requests The requested resources for the Proxy containers
##
resources:
limits: {}
requests: {}
## Proxy containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## @param proxy.containerSecurityContext.enabled Enabled Proxy containers' Security Context
## @param proxy.containerSecurityContext.runAsUser Set Proxy container's Security Context runAsUser
## @param proxy.containerSecurityContext.runAsNonRoot Set Proxy container's Security Context runAsNonRoot
##
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
## Proxy pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param proxy.podSecurityContext.enabled Enabled Proxy pods' Security Context
## @param proxy.podSecurityContext.fsGroup Set Proxy pod's Security Context fsGroup
##
podSecurityContext:
enabled: true
fsGroup: 1001
## @param proxy.lifecycleHooks Add lifecycle hooks to the Proxy deployment
##
lifecycleHooks: {}
## Deployment pod host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
## @param proxy.hostAliases Add deployment host aliases
##
hostAliases: []
## @param proxy.podLabels Add extra labels to the Proxy pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## @param proxy.podAnnotations Add extra annotations to the Proxy pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## @param proxy.podAffinityPreset Pod affinity preset. Ignored if `proxy.affinity` is set. Allowed values: `soft` or `hard`
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAffinityPreset: ""
## @param proxy.podAntiAffinityPreset Pod anti-affinity preset. Ignored if `proxy.affinity` is set. Allowed values: `soft` or `hard`
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAntiAffinityPreset: soft
## Node affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
## Allowed values: soft, hard
## @param proxy.nodeAffinityPreset.type Node affinity preset type. Ignored if `proxy.affinity` is set. Allowed values: `soft` or `hard`
## @param proxy.nodeAffinityPreset.key Node label key to match. Ignored if `proxy.affinity` is set
## @param proxy.nodeAffinityPreset.values Node label values to match. Ignored if `proxy.affinity` is set
##
nodeAffinityPreset:
type: ""
## E.g.
## key: "kubernetes.io/e2e-az-name"
##
key: ""
## E.g.
## values:
## - e2e-az1
## - e2e-az2
##
values: []
## @param proxy.affinity Affinity for pod assignment. Evaluated as a template.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## @param proxy.nodeSelector Node labels for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## @param proxy.tolerations Tolerations for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## @param proxy.topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods
##
topologySpreadConstraints: []
## @param proxy.priorityClassName Priority Class Name
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
##
priorityClassName: ""
## @param proxy.schedulerName Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
## @param proxy.terminationGracePeriodSeconds Seconds Proxy pod needs to terminate gracefully
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
##
terminationGracePeriodSeconds: ""
## @param proxy.updateStrategy.type Update strategy - only really applicable for deployments with RWO PVs attached
## @param proxy.updateStrategy.rollingUpdate Proxy deployment rolling update configuration parameters
## If replicas = 1, an update can get "stuck", as the previous pod remains attached to the
## PV, and the "incoming" pod can never start. Changing the strategy to "Recreate" will
## terminate the single previous pod, so that the new, incoming pod can attach to the PV
##
updateStrategy:
type: Recreate
## rollingUpdate: {}
## @param proxy.extraVolumes Optionally specify extra list of additional volumes for Proxy pods
##
extraVolumes: []
## @param proxy.extraVolumeMounts Optionally specify extra list of additional volumeMounts for Proxy container(s)
##
extraVolumeMounts: []
## @param proxy.initContainers Add additional init containers to the Proxy pods
## Example:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
initContainers: []
## @param proxy.sidecars Add additional sidecar containers to the Proxy pod
## Example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: []
## PodDisruptionBudget settings
##
pdb:
## @param proxy.pdb.create Deploy Proxy PodDisruptionBudget
##
create: false
## @param proxy.pdb.minAvailable Set minimum available proxy instances
##
minAvailable: ""
    ## @param proxy.pdb.maxUnavailable Set maximum unavailable proxy instances
##
maxUnavailable: ""
## @section Proxy Traffic Exposure Parameters
##
## Network policy
##
networkPolicy:
## @param proxy.networkPolicy.enabled Deploy Proxy network policies
##
enabled: true
## @param proxy.networkPolicy.allowInterspaceAccess Allow communication between pods in different namespaces
##
allowInterspaceAccess: true
## @param proxy.networkPolicy.extraIngress [string] Add extra ingress rules to the NetworkPolicy
##
extraIngress: |
## Any IP --> Proxy
##
- ports:
- port: {{ .Values.proxy.containerPort.http }}
## @param proxy.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy
##
extraEgress: ""
service:
api:
## @param proxy.service.api.type API service type
##
type: ClusterIP
## @param proxy.service.api.ports.http API service HTTP port
##
ports:
http: 8001
## @param proxy.service.api.nodePorts.http NodePort for the HTTP endpoint
## NOTE: choose port between <30000-32767>
##
nodePorts:
http: ""
## @param proxy.service.api.sessionAffinity Control where client requests go, to the same pod or round-robin
## Values: ClientIP or None
## ref: https://kubernetes.io/docs/user-guide/services/
##
sessionAffinity: None
## @param proxy.service.api.sessionAffinityConfig Additional settings for the sessionAffinity
## sessionAffinityConfig:
## clientIP:
## timeoutSeconds: 300
##
sessionAffinityConfig: {}
      ## @param proxy.service.api.clusterIP API service Cluster IP
## e.g.:
## clusterIP: None
##
clusterIP: ""
      ## @param proxy.service.api.loadBalancerIP API service Load Balancer IP
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer
##
loadBalancerIP: ""
      ## @param proxy.service.api.loadBalancerSourceRanges API service Load Balancer sources
## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
## e.g:
## loadBalancerSourceRanges:
## - 10.10.10.0/24
##
loadBalancerSourceRanges: []
      ## @param proxy.service.api.externalTrafficPolicy API service external traffic policy
## ref https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
##
externalTrafficPolicy: Cluster
      ## @param proxy.service.api.annotations Additional custom annotations for API service
##
annotations: {}
      ## @param proxy.service.api.extraPorts Extra port to expose on API service
##
extraPorts: []
metrics:
## @param proxy.service.metrics.type Metrics service type
##
type: ClusterIP
## @param proxy.service.metrics.ports.http Metrics service port
##
ports:
http: 8002
## @param proxy.service.metrics.nodePorts.http NodePort for the HTTP endpoint
## NOTE: choose port between <30000-32767>
##
nodePorts:
http: ""
## @param proxy.service.metrics.sessionAffinity Control where client requests go, to the same pod or round-robin
## Values: ClientIP or None
## ref: https://kubernetes.io/docs/user-guide/services/
##
sessionAffinity: None
## @param proxy.service.metrics.sessionAffinityConfig Additional settings for the sessionAffinity
## sessionAffinityConfig:
## clientIP:
## timeoutSeconds: 300
##
sessionAffinityConfig: {}
      ## @param proxy.service.metrics.clusterIP Metrics service Cluster IP
## e.g.:
## clusterIP: None
##
clusterIP: ""
      ## @param proxy.service.metrics.loadBalancerIP Metrics service Load Balancer IP
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer
##
loadBalancerIP: ""
      ## @param proxy.service.metrics.loadBalancerSourceRanges Metrics service Load Balancer sources
## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
## e.g:
## loadBalancerSourceRanges:
## - 10.10.10.0/24
##
loadBalancerSourceRanges: []
      ## @param proxy.service.metrics.externalTrafficPolicy Metrics service external traffic policy
## ref https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
##
externalTrafficPolicy: Cluster
      ## @param proxy.service.metrics.annotations Additional custom annotations for Metrics service
##
annotations: {}
      ## @param proxy.service.metrics.extraPorts Extra port to expose on Metrics service
##
extraPorts: []
public:
## @param proxy.service.public.type Public service type
##
type: ClusterIP
## @param proxy.service.public.ports.http Public service HTTP port
##
ports:
http: 80
## @param proxy.service.public.nodePorts.http NodePort for the HTTP endpoint
## NOTE: choose port between <30000-32767>
##
nodePorts:
http: ""
## @param proxy.service.public.sessionAffinity Control where client requests go, to the same pod or round-robin
## Values: ClientIP or None
## ref: https://kubernetes.io/docs/user-guide/services/
##
sessionAffinity: None
## @param proxy.service.public.sessionAffinityConfig Additional settings for the sessionAffinity
## sessionAffinityConfig:
## clientIP:
## timeoutSeconds: 300
##
sessionAffinityConfig: {}
      ## @param proxy.service.public.clusterIP Public service Cluster IP
## e.g.:
## clusterIP: None
##
clusterIP: ""
      ## @param proxy.service.public.loadBalancerIP Public service Load Balancer IP
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer
##
loadBalancerIP: ""
      ## @param proxy.service.public.loadBalancerSourceRanges Public service Load Balancer sources
## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
## e.g:
## loadBalancerSourceRanges:
## - 10.10.10.0/24
##
loadBalancerSourceRanges: []
      ## @param proxy.service.public.externalTrafficPolicy Public service external traffic policy
## ref https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
##
externalTrafficPolicy: Cluster
      ## @param proxy.service.public.annotations Additional custom annotations for Public service
##
annotations: {}
      ## @param proxy.service.public.extraPorts Extra port to expose on Public service
##
extraPorts: []
  ## Configure the ingress resource that allows you to access your JupyterHub instance
##
ingress:
## @param proxy.ingress.enabled Set to true to enable ingress record generation
##
enabled: true
## @param proxy.ingress.apiVersion Force Ingress API version (automatically detected if not set)
##
apiVersion: ""
    ## @param proxy.ingress.ingressClassName IngressClass that will be used to implement the Ingress (Kubernetes 1.18+)
## This is supported in Kubernetes 1.18+ and required if you have more than one IngressClass marked as the default for your cluster.
## ref: https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/
##
ingressClassName: ""
## @param proxy.ingress.pathType Ingress path type
##
pathType: ImplementationSpecific
## @param proxy.ingress.hostname Set ingress rule hostname
##
    hostname: jupyterhub-test-1.mydomain.com
## @param proxy.ingress.path Path to the Proxy pod
## NOTE: You may need to set this to '/*' in order to use this with ALB ingress controllers
##
path: /
## @param proxy.ingress.annotations [object] Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations.
## Use this parameter to set the required annotations for cert-manager, see
## ref: https://cert-manager.io/docs/usage/ingress/#supported-annotations
## e.g:
## annotations:
## kubernetes.io/ingress.class: nginx
## cert-manager.io/cluster-issuer: cluster-issuer-name
##
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod-cluster
kubernetes.io/ingress.class: nginx
## @param proxy.ingress.tls Enable TLS configuration for the host defined at `ingress.hostname` parameter
## TLS certificates will be retrieved from a TLS secret with name: `{{- printf "%s-tls" .Values.ingress.hostname }}`
## You can:
## - Use the `ingress.secrets` parameter to create this TLS secret
## - Rely on cert-manager to create it by setting the corresponding annotations
## - Rely on Helm to create self-signed certificates by setting `ingress.selfSigned=true`
##
tls: true
## @param proxy.ingress.selfSigned Create a TLS secret for this ingress record using self-signed certificates generated by Helm
##
selfSigned: false
## @param proxy.ingress.extraHosts An array with additional hostname(s) to be covered with the ingress record
## e.g:
## extraHosts:
## - name: jupyterhub.local
## path: /
##
extraHosts: []
## @param proxy.ingress.extraPaths An array with additional arbitrary paths that may need to be added to the ingress under the main host
## e.g:
## extraPaths:
## - path: /*
## backend:
## serviceName: ssl-redirect
## servicePort: use-annotation
##
extraPaths: []
## @param proxy.ingress.extraTls The tls configuration for additional hostnames to be covered with this ingress record.
## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
## extraTls:
## - hosts:
## - jupyterhub.local
## secretName: jupyterhub.local-tls
##
extraTls: []
## @param proxy.ingress.secrets Custom TLS certificates as secrets
## NOTE: 'key' and 'certificate' are expected in PEM format
## NOTE: 'name' should line up with a 'secretName' set further up
## If it is not set and you're using cert-manager, this is unneeded, as it will create a secret for you with valid certificates
## If it is not set and you're NOT using cert-manager either, self-signed certificates will be created valid for 365 days
## It is also possible to create and manage the certificates outside of this helm chart
## Please see README.md for more information
## e.g:
## secrets:
## - name: jupyterhub.local-tls
## key: |-
## -----BEGIN RSA PRIVATE KEY-----
## ...
## -----END RSA PRIVATE KEY-----
## certificate: |-
## -----BEGIN CERTIFICATE-----
## ...
## -----END CERTIFICATE-----
##
secrets: []
## @param proxy.ingress.extraRules Additional rules to be covered with this ingress record
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
## e.g:
## extraRules:
## - host: example.local
## http:
## path: /
## backend:
## service:
## name: example-svc
## port:
## name: http
##
extraRules: []
## @section Proxy Metrics parameters
metrics:
## Prometheus Operator ServiceMonitor configuration
##
serviceMonitor:
## @param proxy.metrics.serviceMonitor.enabled If the operator is installed in your cluster, set to true to create a Service Monitor Entry
##
enabled: false
## @param proxy.metrics.serviceMonitor.namespace Namespace which Prometheus is running in
##
namespace: ""
## @param proxy.metrics.serviceMonitor.path HTTP path to scrape for metrics
##
path: /metrics
## @param proxy.metrics.serviceMonitor.interval Interval at which metrics should be scraped
##
interval: 30s
## @param proxy.metrics.serviceMonitor.scrapeTimeout Specify the timeout after which the scrape is ended
## e.g:
## scrapeTimeout: 30s
##
scrapeTimeout: ""
## @param proxy.metrics.serviceMonitor.labels Additional labels that can be used so ServiceMonitor will be discovered by Prometheus
##
labels: {}
## @param proxy.metrics.serviceMonitor.selector Prometheus instance selector labels
## ref: https://github.com/bitnami/charts/tree/main/bitnami/prometheus-operator#prometheus-configuration
##
selector: {}
## @param proxy.metrics.serviceMonitor.relabelings RelabelConfigs to apply to samples before scraping
##
relabelings: []
## @param proxy.metrics.serviceMonitor.metricRelabelings MetricRelabelConfigs to apply to samples before ingestion
##
metricRelabelings: []
## @param proxy.metrics.serviceMonitor.honorLabels Specify honorLabels parameter to add the scrape endpoint
##
honorLabels: false
## @param proxy.metrics.serviceMonitor.jobLabel The name of the label on the target service to use as the job name in prometheus.
##
jobLabel: ""
## @section Image puller deployment parameters
##
## Image Puller deployment parameters
##
imagePuller:
## @param imagePuller.enabled Deploy ImagePuller daemonset
##
enabled: true
## @param imagePuller.command Override ImagePuller default command
##
command: []
## @param imagePuller.args Override ImagePuller default args
##
args: []
## @param imagePuller.extraEnvVars Add extra environment variables to the ImagePuller container
## Example:
## extraEnvVars:
## - name: FOO
## value: "bar"
##
extraEnvVars: []
## @param imagePuller.extraEnvVarsCM Name of existing ConfigMap containing extra env vars
##
extraEnvVarsCM: ""
## @param imagePuller.extraEnvVarsSecret Name of existing Secret containing extra env vars
##
extraEnvVarsSecret: ""
## @param imagePuller.customStartupProbe Override default startup probe
##
customStartupProbe: {}
## @param imagePuller.customLivenessProbe Override default liveness probe
##
customLivenessProbe: {}
## @param imagePuller.customReadinessProbe Override default readiness probe
##
customReadinessProbe: {}
## ImagePuller resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## @param imagePuller.resources.limits The resources limits for the ImagePuller containers
## @param imagePuller.resources.requests The requested resources for the ImagePuller containers
##
resources:
limits: {}
requests: {}
## ImagePuller containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## @param imagePuller.containerSecurityContext.enabled Enabled ImagePuller containers' Security Context
## @param imagePuller.containerSecurityContext.runAsUser Set ImagePuller container's Security Context runAsUser
## @param imagePuller.containerSecurityContext.runAsNonRoot Set ImagePuller container's Security Context runAsNonRoot
##
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
## ImagePuller pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param imagePuller.podSecurityContext.enabled Enabled ImagePuller pods' Security Context
## @param imagePuller.podSecurityContext.fsGroup Set ImagePuller pod's Security Context fsGroup
##
podSecurityContext:
enabled: true
fsGroup: 1001
## @param imagePuller.lifecycleHooks Add lifecycle hooks to the ImagePuller deployment
##
lifecycleHooks: {}
## @param imagePuller.hostAliases Add deployment host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
##
hostAliases: []
## @param imagePuller.podLabels Pod extra labels
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## @param imagePuller.podAnnotations Annotations for ImagePuller pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## @param imagePuller.podAffinityPreset Pod affinity preset. Ignored if `imagePuller.affinity` is set. Allowed values: `soft` or `hard`
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAffinityPreset: ""
## @param imagePuller.podAntiAffinityPreset Pod anti-affinity preset. Ignored if `imagePuller.affinity` is set. Allowed values: `soft` or `hard`
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAntiAffinityPreset: soft
## Node affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
## @param imagePuller.nodeAffinityPreset.type Node affinity preset type. Ignored if `imagePuller.affinity` is set. Allowed values: `soft` or `hard`
## @param imagePuller.nodeAffinityPreset.key Node label key to match. Ignored if `imagePuller.affinity` is set
## @param imagePuller.nodeAffinityPreset.values Node label values to match. Ignored if `imagePuller.affinity` is set
##
nodeAffinityPreset:
type: ""
## E.g.
## key: "kubernetes.io/e2e-az-name"
##
key: ""
## E.g.
## values:
## - e2e-az1
## - e2e-az2
##
values: []
## @param imagePuller.affinity Affinity for pod assignment. Evaluated as a template.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## @param imagePuller.nodeSelector Node labels for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## @param imagePuller.tolerations Tolerations for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## @param imagePuller.topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods
##
topologySpreadConstraints: []
## @param imagePuller.priorityClassName Priority Class Name
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
##
priorityClassName: ""
## @param imagePuller.schedulerName Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
## @param imagePuller.terminationGracePeriodSeconds Seconds ImagePuller pod needs to terminate gracefully
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
##
terminationGracePeriodSeconds: ""
## @param imagePuller.updateStrategy.type Update strategy - only really applicable for deployments with RWO PVs attached
## @param imagePuller.updateStrategy.rollingUpdate ImagePuller deployment rolling update configuration parameters
## If replicas = 1, an update can get "stuck", as the previous pod remains attached to the
## PV, and the "incoming" pod can never start. Changing the strategy to "Recreate" will
## terminate the single previous pod, so that the new, incoming pod can attach to the PV
##
updateStrategy:
type: OnDelete
rollingUpdate: {}
## @param imagePuller.extraVolumes Optionally specify extra list of additional volumes for ImagePuller pods
##
extraVolumes: []
## @param imagePuller.extraVolumeMounts Optionally specify extra list of additional volumeMounts for ImagePuller container(s)
##
extraVolumeMounts: []
## @param imagePuller.initContainers Add additional init containers to the ImagePuller pods
## Example:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
initContainers: []
## @param imagePuller.sidecars Add additional sidecar containers to the ImagePuller pod
## Example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: []
## @section Singleuser deployment parameters
##
## Singleuser deployment parameters
## NOTE: The values in this section are used for generating the hub.configuration value. In case you provide
## a custom hub.configuration or a configmap, these will be ignored.
## @param singleuser.image.registry Single User image registry
## @param singleuser.image.repository Single User image repository
## @param singleuser.image.tag Single User image tag (immutable tags are recommended)
## @param singleuser.image.digest Single User image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag
## @param singleuser.image.pullPolicy Single User image pull policy
## @param singleuser.image.pullSecrets Single User image pull secrets
##
singleuser:
image:
registry: docker.io
repository: bitnami/jupyter-base-notebook
tag: 4.0.0-debian-11-r5
digest: ""
    ## Specify an imagePullPolicy
    ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
    ## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
    ##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## e.g:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## @param singleuser.notebookDir Notebook directory (it will be the same as the PVC volume mount)
##
notebookDir: /opt/bitnami/jupyterhub-singleuser
## @param singleuser.allowPrivilegeEscalation Controls whether a process can gain more privileges than its parent process
##
allowPrivilegeEscalation: false
## Command for running the container (set to default if not set). Use array form
## @param singleuser.command Override Single User default command
##
command: []
## @param singleuser.extraEnvVars Extra environment variables that should be set for the user pods
## ref: https://zero-to-jupyterhub.readthedocs.io/en/latest/resources/reference.html#singleuser-extraenv
##
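  ## e.g. (illustrative):
  ## extraEnvVars:
  ##   - name: JUPYTER_ENABLE_LAB
  ##     value: "yes"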
extraEnvVars: []
## @param singleuser.containerPort Single User container port
##
containerPort: 8888
## Singleuser resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## @param singleuser.resources.limits The resources limits for the Singleuser containers
## @param singleuser.resources.requests The requested resources for the Singleuser containers
##
resources:
limits: {}
requests: {}
## singleuser containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## @param singleuser.containerSecurityContext.enabled Enabled Single User containers' Security Context
## @param singleuser.containerSecurityContext.runAsUser Set Single User container's Security Context runAsUser
##
containerSecurityContext:
enabled: true
runAsUser: 1001
## singleuser pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param singleuser.podSecurityContext.enabled Enabled Single User pods' Security Context
## @param singleuser.podSecurityContext.fsGroup Set Single User pod's Security Context fsGroup
##
podSecurityContext:
enabled: true
fsGroup: 1001
## @param singleuser.podLabels Extra labels for Single User pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## @param singleuser.podAnnotations Annotations for Single User pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## @param singleuser.nodeSelector Node labels for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## @param singleuser.tolerations Tolerations for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## @param singleuser.priorityClassName Single User pod priority class name
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
## @param singleuser.lifecycleHooks Add lifecycle hooks to the Single User deployment to automate configuration before or after startup
##
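## e.g. (hypothetical command, for illustration only):
## lifecycleHooks:
##   postStart:
##     exec:
##       command: ["/bin/sh", "-c", "echo 'single user pod started'"]
##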
lifecycleHooks: {}
## @param singleuser.extraVolumes Optionally specify extra list of additional volumes for Single User pods
##
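## e.g. (hypothetical "scratch" volume, paired with the mount example below):
## extraVolumes:
##   - name: scratch
##     emptyDir: {}
##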
extraVolumes: []
## @param singleuser.extraVolumeMounts Optionally specify extra list of additional volumeMounts for Single User container(s)
##
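## e.g. (mounts the hypothetical "scratch" volume declared above):
## extraVolumeMounts:
##   - name: scratch
##     mountPath: /tmp/scratch
##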
extraVolumeMounts: []
## @param singleuser.initContainers Add additional init containers to the Single User pods
## Example:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
initContainers: []
## @param singleuser.sidecars Add additional sidecar containers to the Single User pod
## Example:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: []
## @section Single User RBAC parameters
serviceAccount:
## @param singleuser.serviceAccount.create Specifies whether a ServiceAccount should be created
##
create: true
## @param singleuser.serviceAccount.name Override Single User service account name
## If not set and create is true, a name is generated using the fullname template
##
name: ""
## @param singleuser.serviceAccount.automountServiceAccountToken Allows auto mount of ServiceAccountToken on the serviceAccount created
## Can be set to false if pods using this serviceAccount do not need to use K8s API
##
automountServiceAccountToken: true
## @param singleuser.serviceAccount.annotations Additional custom annotations for the ServiceAccount
##
annotations: {}
## @section Single User Persistence parameters
## Enable persistence using Persistent Volume Claims
## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
persistence:
## @param singleuser.persistence.enabled Enable persistent volume creation on Single User instances
## If true, use a Persistent Volume Claim; if false, use emptyDir
##
enabled: true
## @param singleuser.persistence.storageClass Persistent Volumes storage class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass: ""
## @param singleuser.persistence.accessModes Persistent Volumes access modes
##
accessModes:
- ReadWriteOnce
## @param singleuser.persistence.size Persistent Volumes size
##
size: 10Gi
## @section Traffic exposure parameters
networkPolicy:
## @param singleuser.networkPolicy.enabled Deploy Single User network policies
##
enabled: true
## @param singleuser.networkPolicy.allowInterspaceAccess Allow communication between pods in different namespaces
##
allowInterspaceAccess: true
## @param singleuser.networkPolicy.allowCloudMetadataAccess Allow Single User pods to access Cloud Metadata endpoints
##
allowCloudMetadataAccess: false
## @param singleuser.networkPolicy.extraIngress Add extra ingress rules to the NetworkPolicy
##
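## e.g. (hypothetical rule: allow the proxy pods to reach the notebook port):
## extraIngress: |
##   - ports:
##       - port: 8888
##     from:
##       - podSelector:
##           matchLabels:
##             app.kubernetes.io/component: proxy
##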
extraIngress: ""
## @param singleuser.networkPolicy.extraEgress Add extra egress rules to the NetworkPolicy
##
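## e.g. (hypothetical rule: allow DNS resolution):
## extraEgress: |
##   - ports:
##       - port: 53
##         protocol: UDP
##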
extraEgress: ""
## @section Auxiliary image parameters
##
## @param auxiliaryImage.registry Auxiliary image registry
## @param auxiliaryImage.repository Auxiliary image repository
## @param auxiliaryImage.tag Auxiliary image tag (immutable tags are recommended)
## @param auxiliaryImage.digest Auxiliary image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
## @param auxiliaryImage.pullPolicy Auxiliary image pull policy
## @param auxiliaryImage.pullSecrets Auxiliary image pull secrets
##
auxiliaryImage:
registry: docker.io
repository: bitnami/bitnami-shell
tag: 11-debian-11-r118
digest: ""
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## e.g:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## @section JupyterHub database parameters
## PostgreSQL chart configuration
## ref: https://github.com/bitnami/charts/blob/main/bitnami/postgresql/values.yaml
## @param postgresql.enabled Switch to enable or disable the PostgreSQL helm chart
## @param postgresql.auth.username Name for a custom user to create
## @param postgresql.auth.password Password for the custom user to create
## @param postgresql.auth.database Name for a custom database to create
## @param postgresql.auth.existingSecret Name of existing secret to use for PostgreSQL credentials
## @param postgresql.architecture PostgreSQL architecture (`standalone` or `replication`)
## @param postgresql.service.ports.postgresql PostgreSQL service port
##
postgresql:
enabled: true
auth:
username: bn_jupyterhub
password: "1234567890"
database: bitnami_jupyterhub
existingSecret: ""
architecture: standalone
service:
ports:
postgresql: 5432
## External PostgreSQL configuration
## All of these values are only used when postgresql.enabled is set to false
## @param externalDatabase.host Database host
## @param externalDatabase.port Database port number
## @param externalDatabase.user Non-root username for JupyterHub
## @param externalDatabase.password Password for the non-root username for JupyterHub
## @param externalDatabase.database JupyterHub database name
## @param externalDatabase.existingSecret Name of an existing secret resource containing the database credentials
## @param externalDatabase.existingSecretPasswordKey Name of an existing secret key containing the database credentials
##
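## e.g. (hypothetical host and credentials, for illustration only):
## externalDatabase:
##   host: postgres.example.com
##   port: 5432
##   user: bn_jupyterhub
##   password: change-me
##   database: bitnami_jupyterhub
##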
externalDatabase:
host: ""
port: 5432
user: postgres
database: jupyterhub
password: ""
existingSecret: ""
existingSecretPasswordKey: ""
For clarity, these are the changes.
$ git diff
diff --git a/bitnami/jupyterhub/values.yaml b/bitnami/jupyterhub/values.yaml
index bc1729558..cc61bc212 100644
--- a/bitnami/jupyterhub/values.yaml
+++ b/bitnami/jupyterhub/values.yaml
@@ -88,7 +88,7 @@ hub:
baseUrl: /
## @param hub.adminUser Hub Dummy authenticator admin user
##
- adminUser: user
+ adminUser: administrator
## @param hub.password Hub Dummy authenticator password
##
password: ""
@@ -110,16 +110,10 @@ hub:
config:
JupyterHub:
admin_access: true
- authenticator_class: dummy
- DummyAuthenticator:
- {{- if .Values.hub.password }}
- password: {{ .Values.hub.password | quote }}
- {{- else }}
- password: {{ randAlphaNum 10 | quote }}
- {{- end }}
+ authenticator_class: nativeauthenticator.NativeAuthenticator
Authenticator:
admin_users:
- - {{ .Values.hub.adminUser }}
+ - administrator
cookieSecret:
concurrentSpawnLimit: 64
consecutiveFailureLimit: 5
@@ -426,8 +420,8 @@ hub:
## terminate the single previous pod, so that the new, incoming pod can attach to the PV
##
updateStrategy:
- type: RollingUpdate
- rollingUpdate: {}
+ type: Recreate
+ ## rollingUpdate: {}
## @param hub.extraVolumes Optionally specify extra list of additional volumes for Hub pods
##
extraVolumes: []
@@ -842,8 +836,8 @@ proxy:
## terminate the single previous pod, so that the new, incoming pod can attach to the PV
##
updateStrategy:
- type: RollingUpdate
## @param proxy.ingress.path Path to the Proxy pod
## NOTE: You may need to set this to '/*' in order to use this with ALB ingress controllers
##
@@ -1091,7 +1085,9 @@ proxy:
## kubernetes.io/ingress.class: nginx
## cert-manager.io/cluster-issuer: cluster-issuer-name
##
- annotations: {}
+ annotations:
+ cert-manager.io/cluster-issuer: letsencrypt-prod-cluster
+ kubernetes.io/ingress.class: nginx
## @param proxy.ingress.tls Enable TLS configuration for the host defined at `ingress.hostname` parameter
## TLS certificates will be retrieved from a TLS secret with name: `{{- printf "%s-tls" .Values.ingress.hostname }}`
## You can:
@@ -1099,7 +1095,7 @@ proxy:
## - Rely on cert-manager to create it by setting the corresponding annotations
## - Rely on Helm to create self-signed certificates by setting `ingress.selfSigned=true`
##
- tls: false
+ tls: true
## @param proxy.ingress.selfSigned Create a TLS secret for this ingress record using self-signed certificates generated by Helm
##
selfSigned: false
@@ -1344,7 +1340,7 @@ imagePuller:
## terminate the single previous pod, so that the new, incoming pod can attach to the PV
##
updateStrategy:
- type: RollingUpdate
+ type: OnDelete
rollingUpdate: {}
## @param imagePuller.extraVolumes Optionally specify extra list of additional volumes for ImagePuller pods
##
@@ -1419,7 +1415,7 @@ singleuser:
##
command: []
## @param singleuser.extraEnvVars Extra environment variables that should be set for the user pods
- ## ref: https://zero-to-jupyterhub.readthedocs.io/en/latest/resources/reference.html#singleuser-extraenv
+ ## ref: https://zero-to-jupyterhub.readthedocs.io/en/latest/resources/reference.html#singleuser-extraenv
##
extraEnvVars: []
## @param singleuser.containerPort Single User container port
@@ -1604,7 +1600,7 @@ postgresql:
enabled: true
auth:
username: bn_jupyterhub
- password: ""
+ password: "1234567890"
database: bitnami_jupyterhub
existingSecret: ""
architecture: standalone
Hi, I think the reason is that some anti-XSRF logic was added in a recent version of JupyterHub, and the ingress is a reverse proxy, so JupyterHub sees the request coming from the proxy and not from the user. Could you try without ingress?
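As a hedged aside: if the reverse proxy is indeed what trips JupyterHub's anti-XSRF check, one unverified diagnostic sketch is to relax tornado's xsrf_cookies setting through the same config block shown in the diff above. JupyterHub.tornado_settings is a standard JupyterHub traitlet and xsrf_cookies a standard tornado application setting, but whether this chart forwards them is an assumption, and disabling the check weakens CSRF protection, so use it only to confirm the diagnosis:
config:
  JupyterHub:
    tornado_settings:
      xsrf_cookies: false  # assumption: bypasses the anti-XSRF check, for debugging only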
I am also using this helm chart with the same ingress and do not experience any issue. The JupyterHub version is 4.0.1.
That uses version 4.0.1, and this chart is using 4.0.0. We have released 4.0.1 too, but it is not updated in the chart yet. Do you mind testing 4.0.1? In any case, could you try without ingress? (Just to check.)
Sorry for the late reply. With NodePort it seems OK. I will also try to test with 4.0.1.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Name and Version
bitnami/jupyterhub 4.1.4
What architecture are you using?
amd64
What steps will reproduce the bug?
Are you using any custom parameters or values?
Database password set, as well as nginx ingress with let's encrypt certificate issuer (ingress annotations). Both work as expected.
What is the expected behavior?
The admin user should be able to sign up and then login.
What do you see instead?
403 : Forbidden '_xsrf' argument missing from POST
Additional information
Dummy authentication works as expected.
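For anyone trying to reproduce: the custom values reported above condense to roughly the following sketch, taken from the diff earlier in the thread (the cluster-issuer name is environment-specific, and the hub configuration block was additionally switched to authenticator_class nativeauthenticator.NativeAuthenticator as the diff shows; any ingress hostname/enabled flags are omitted here because they do not appear in the diff):
hub:
  adminUser: administrator
proxy:
  ingress:
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod-cluster
      kubernetes.io/ingress.class: nginx
    tls: true
postgresql:
  auth:
    password: "1234567890"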