
mkdir: cannot create directory ‘/data/pgdata’: Permission denied #3471

Closed: voipas closed this issue 4 years ago

voipas commented 4 years ago

Which chart: postgresql-9.1.1

Describe the bug I'm creating a K3s homelab cluster on Raspberry Pi, and I want to install Keycloak from codecentric, whose dependency is PostgreSQL from bitnami. Unfortunately, the postgresql deployment fails with CrashLoopBackOff, so I can't deploy Keycloak.

To Reproduce Steps to reproduce the behavior:

  1. Create namespace security
  2. Create PV for keycloak postgresql
  3. Create PVC for keycloak postgresql in namespace security
  4. Make keycloak values changes (images for arm, ingress and persistent stuff)
  5. Deploy - helm install keycloak codecentric/keycloak --values keycloak.values.yml --namespace security

Expected behavior Successfully deploy PostgreSQL using the existing PV and PVC

Version of Helm and Kubernetes:

version.BuildInfo{Version:"v3.3.0-rc.2", GitCommit:"8a4aeec08d67a7b84472007529e8097ec3742105", GitTreeState:"dirty", GoVersion:"go1.14.6"}
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/arm"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6+k3s1", GitCommit:"6f56fa1d68a5a48b8b6fdefa8eb7ead2015a4b3a", GitTreeState:"clean", BuildDate:"2020-07-16T20:44:01Z", GoVersion:"go1.13.11", Compiler:"gc", Platform:"linux/arm"}

Namespace creation

kubectl create namespace security

Persistent Volume YAML

# keycloak.persistentvolume.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "keycloak-ssd"
  labels:
    type: "local"
spec:
  storageClassName: "manual"
  capacity:
    storage: "2Gi"
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/ssd/keycloak-ps"
---
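
Persistent Volume Claim YAML

The PVC manifest itself is not included in the report; below is a minimal sketch that would match the PV above and the Bound status shown later (the claim name, storage class, size, and namespace come from the report; everything else is assumed):

# keycloak.persistentvolumeclaim.yml (hypothetical reconstruction)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "keycloak-ssd"
  namespace: "security"
spec:
  storageClassName: "manual"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "2Gi"
---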

Ownership of location

$ ls -l /mnt/ssd/
total 20
drwxr-xr-x 3 pi pi  4096 Aug 19 20:24 keycloak-ps

$ ls -l /mnt/ssd/keycloak-ps/
total 4
drwxr-xr-x 3 pi pi 4096 Aug 19 20:24 data

$ ls -l /mnt/ssd/keycloak-ps/data/
total 4
drwxr-xr-x 2 pi pi 4096 Aug 19 20:24 pgdata

PV and PVC Status

$ kubectl get pvc -n security
NAME           STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
keycloak-ssd   Bound    keycloak-ssd   2Gi        RWO            manual         27h

$ kubectl get pv -o wide
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE   VOLUMEMODE
keycloak-ssd   2Gi        RWO            Retain           Bound    security/keycloak-ssd   manual                  27h   Filesystem

Keycloak Values Yaml

# Optionally override the fully qualified name
fullnameOverride: ""

# Optionally override the name
nameOverride: ""

# The number of replicas to create
replicas: 1

image:
  # The Keycloak image repository
  repository: richieroldan/keycloak
  # Overrides the Keycloak image tag whose default is the chart version
  tag: "v9.0.0-arm"
  # The Keycloak image pull policy
  pullPolicy: IfNotPresent

# Image pull secrets for the Pod
imagePullSecrets:
  - name: myRegistrKeySecretName

# Mapping between IPs and hostnames that will be injected as entries in the Pod's hosts files
hostAliases: []
# - ip: "1.2.3.4"
#   hostnames:
#     - "my.host.com"

# Indicates whether information about services should be injected into Pod's environment variables, matching the syntax of Docker links
enableServiceLinks: true

# Pod management policy. One of `Parallel` or `OrderedReady`
podManagementPolicy: Parallel

# Pod restart policy. One of `Always`, `OnFailure`, or `Never`
restartPolicy: Always

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
  # Additional annotations for the ServiceAccount
  annotations: {}
  # Additional labels for the ServiceAccount
  labels: {}
  # Image pull secrets that are attached to the ServiceAccount
  imagePullSecrets: []

rbac:
  create: false
  rules: []
  # RBAC rules for KUBE_PING
  #  - apiGroups:
  #      - ""
  #    resources:
  #      - pods
  #    verbs:
  #      - get
  #      - list

# SecurityContext for the entire Pod. Every container running in the Pod will inherit this SecurityContext. This might be relevant when other components of the environment inject additional containers into running Pods (service meshes are the most prominent example for this)
podSecurityContext:
  fsGroup: 1000

# SecurityContext for the Keycloak container
securityContext:
  runAsUser: 1000
  runAsNonRoot: true

# Additional init containers, e. g. for providing custom themes
extraInitContainers: ""

# Additional sidecar containers, e. g. for a database proxy, such as Google's cloudsql-proxy
extraContainers: ""

# Lifecycle hooks for the Keycloak container
lifecycleHooks: |
#  postStart:
#    exec:
#      command:
#        - /bin/sh
#        - -c
#        - ls

# Termination grace period in seconds for Keycloak shutdown. Clusters with a large cache might need to extend this to give Infinispan more time to rebalance
terminationGracePeriodSeconds: 60

# The internal Kubernetes cluster domain
clusterDomain: homelab.mydomain.com

## Overrides the default entrypoint of the Keycloak container
command: []

## Overrides the default args for the Keycloak container
args: []

# Additional environment variables for Keycloak
extraEnv: ""
  # - name: KEYCLOAK_LOGLEVEL
  #   value: DEBUG
  # - name: WILDFLY_LOGLEVEL
  #   value: DEBUG
  # - name: CACHE_OWNERS_COUNT
  #   value: "2"
  # - name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
  #   value: "2"

# Additional environment variables for Keycloak mapped from Secret or ConfigMap
extraEnvFrom: ""

#  Pod priority class name
priorityClassName: ""

# Pod affinity
affinity: |
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            {{- include "keycloak.selectorLabels" . | nindent 10 }}
          matchExpressions:
            - key: app.kubernetes.io/component
              operator: NotIn
              values:
                - test
        topologyKey: kubernetes.io/hostname
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              {{- include "keycloak.selectorLabels" . | nindent 12 }}
            matchExpressions:
              - key: app.kubernetes.io/component
                operator: NotIn
                values:
                  - test
          topologyKey: failure-domain.beta.kubernetes.io/zone

# Node labels for Pod assignment
nodeSelector: {}

# Node taints to tolerate
tolerations: []

# Additional Pod labels
podLabels: {}

# Additional Pod annotations
podAnnotations: {}

# Liveness probe configuration
livenessProbe: |
  httpGet:
    path: /auth/
    port: http
  initialDelaySeconds: 300
  timeoutSeconds: 5

# Readiness probe configuration
readinessProbe: |
  httpGet:
    path: /auth/realms/master
    port: http
  initialDelaySeconds: 30
  timeoutSeconds: 1

# Pod resource requests and limits
resources: {}
  # requests:
  #   cpu: "500m"
  #   memory: "1024Mi"
  # limits:
  #   cpu: "500m"
  #   memory: "1024Mi"

# Startup scripts to run before Keycloak starts up
startupScripts:
  # WildFly CLI script for configuring the node-identifier
  keycloak.cli: |
    {{- .Files.Get "scripts/keycloak.cli" }}
  # mystartup.sh: |
  #   #!/bin/sh
  #
  # echo 'Hello from my custom startup script!'

# Add additional volumes, e. g. for custom themes
extraVolumes: ""

# Add additional volumes mounts, e. g. for custom themes
extraVolumeMounts: ""

# Add additional ports, e. g. for admin console or exposing JGroups ports
extraPorts: []

# Pod disruption budget
podDisruptionBudget: {}
#  maxUnavailable: 1
#  minAvailable: 1

# Annotations for the StatefulSet
statefulsetAnnotations: {}

# Additional labels for the StatefulSet
statefulsetLabels: {}

# Configuration for secrets that should be created
secrets: {}
  # mysecret:
  #   annotations: {}
  #   labels: {}
  #   stringData: {}
  #   data: {}

service:
  # Annotations for headless and HTTP Services
  annotations: {}
  # Additional labels for headless and HTTP Services
  labels: {}
  # key: value
  # The Service type
  type: ClusterIP
  # Optional IP for the load balancer. Used for services of type LoadBalancer only
  loadBalancerIP: ""
  # The http Service port
  httpPort: 80
  # The HTTP Service node port if type is NodePort
  httpNodePort: null
  # The HTTPS Service port
  httpsPort: 443
  # The HTTPS Service node port if type is NodePort
  httpsNodePort: null
  # The WildFly management Service port
  httpManagementPort: 9990
  # The WildFly management Service node port if type is NodePort
  httpManagementNodePort: null
  # Additional Service ports, e. g. for custom admin console
  extraPorts: []

ingress:
  # If `true`, an Ingress is created
  enabled: true
  # The Service port targeted by the Ingress
  servicePort: https
  # Ingress annotations
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  # Additional Ingress labels
  labels: {}
  # List of rules for the Ingress
  rules:
    -
      # Ingress host
      host: idm.mydomain.com
      # Paths for the host
      paths:
        - /
  # TLS configuration
  tls:
    - hosts:
        - idm.mydomain.com
      secretName: idm-mydomain-com-tls

route:
  # If `true`, an OpenShift Route is created
  enabled: false
  # Path for the Route
  path: /
  # Route annotations
  annotations: {}
  # Additional Route labels
  labels: {}
  # Host name for the Route
  host: ""
  # TLS configuration
  tls:
    # If `true`, TLS is enabled for the Route
    enabled: true
    # Insecure edge termination policy of the Route. Can be `None`, `Redirect`, or `Allow`
    insecureEdgeTerminationPolicy: Redirect
    # TLS termination of the route. Can be `edge`, `passthrough`, or `reencrypt`
    termination: edge

pgchecker:
  image:
    # Docker image used to check Postgresql readiness at startup
    repository: docker.io/busybox
    # Image tag for the pgchecker image
    tag: 1.32
    # Image pull policy for the pgchecker image
    pullPolicy: IfNotPresent
  # SecurityContext for the pgchecker container
  securityContext:
    allowPrivilegeEscalation: false
    runAsUser: 1000
    runAsGroup: 1000
    runAsNonRoot: true
  # Resource requests and limits for the pgchecker container
  resources:
    requests:
      cpu: "10m"
      memory: "16Mi"
    limits:
      cpu: "10m"
      memory: "16Mi"

postgresql:
  # If `true`, the Postgresql dependency is enabled
  enabled: true
  postgresqlDataDir: /data/pgdata
  # PostgreSQL User to create
  postgresqlUsername: keycloak
  # PostgreSQL Password for the new user
  postgresqlPassword: keycloak
  # PostgreSQL Database to create
  postgresqlDatabase: keycloak
  image:
      registry: docker.io
      repository: "postgres"
      tag: "9.6.19"
  pullPolicy: IfNotPresent
  # Persistent Volume Storage configuration
  persistence:
      enabled: true # Change to true
      mountPath: /data/
      existingClaim: "keycloak-ssd" # Persistent Volume Claim created earlier
      #accessMode: ReadWriteOnce
      #size: "2Gi"
  #volumePermissions:
        #enabled: true

serviceMonitor:
  # If `true`, a ServiceMonitor resource for the prometheus-operator is created
  enabled: false
  # Optionally sets a target namespace in which to deploy the ServiceMonitor resource
  namespace: ""
  # Annotations for the ServiceMonitor
  annotations: {}
  # Additional labels for the ServiceMonitor
  labels: {}
  # Interval at which Prometheus scrapes metrics
  interval: 10s
  # Timeout for scraping
  scrapeTimeout: 10s
  # The path at which metrics are served
  path: /metrics
  # The Service port at which metrics are served
  port: http-management

prometheusRule:
  # If `true`, a PrometheusRule resource for the prometheus-operator is created
  enabled: false
  # Annotations for the PrometheusRule
  annotations: {}
  # Additional labels for the PrometheusRule
  labels: {}
  # List of rules for Prometheus
  rules: []
  # - alert: keycloak-IngressHigh5xxRate
  #   annotations:
  #     message: The percentage of 5xx errors for keycloak over the last 5 minutes is over 1%.
  #   expr: |
  #     (
  #       sum(
  #         rate(
  #           nginx_ingress_controller_response_duration_seconds_count{exported_namespace="mynamespace",ingress="mynamespace-keycloak",status=~"5[0-9]{2}"}[1m]
  #         )
  #       )
  #       /
  #       sum(
  #         rate(
  #           nginx_ingress_controller_response_duration_seconds_count{exported_namespace="mynamespace",ingress="mynamespace-keycloak"}[1m]
  #         )
  #       )
  #     ) * 100 > 1
  #   for: 5m
  #   labels:
  #     severity: warning

test:
  # If `true`, test resources are created
  enabled: false
  image:
    # The image for the test Pod
    repository: docker.io/unguiculus/docker-python3-phantomjs-selenium
    # The tag for the test Pod image
    tag: v1
    # The image pull policy for the test Pod image
    pullPolicy: IfNotPresent
  # SecurityContext for the entire test Pod
  podSecurityContext:
    fsGroup: 1000
  # SecurityContext for the test container
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true

Describe Pod

$ kubectl describe pods keycloak-postgresql-0 -n security
Name:         keycloak-postgresql-0
Namespace:    security
Priority:     0
Node:         k8s-slave-03/192.168.0.23
Start Time:   Wed, 19 Aug 2020 17:26:38 +0000
Labels:       app.kubernetes.io/instance=keycloak
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=postgresql
              controller-revision-hash=keycloak-postgresql-676fc49df4
              helm.sh/chart=postgresql-9.1.1
              role=master
              statefulset.kubernetes.io/pod-name=keycloak-postgresql-0
Annotations:  <none>
Status:       Running
IP:           10.42.2.3
IPs:
  IP:           10.42.2.3
Controlled By:  StatefulSet/keycloak-postgresql
Containers:
  keycloak-postgresql:
    Container ID:   containerd://d7a4b7ca155ca948d6d8e28a90728d56f11884126344d34d2dd0e5c121a10d86
    Image:          docker.io/postgres:9.6.19
    Image ID:       docker.io/library/postgres@sha256:9aa0b86ae3be8de6f922441b913e8914e840c652b6880a642f42f98f5e2aaeaf
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 19 Aug 2020 17:27:36 +0000
      Finished:     Wed, 19 Aug 2020 17:27:36 +0000
    Ready:          False
    Restart Count:  2
    Requests:
      cpu:      250m
      memory:   256Mi
    Liveness:   exec [/bin/sh -c exec pg_isready -U "keycloak" -d "dbname=keycloak" -h 127.0.0.1 -p 5432] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/sh -c -e exec pg_isready -U "keycloak" -d "dbname=keycloak" -h 127.0.0.1 -p 5432
] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:           false
      POSTGRESQL_PORT_NUMBER:  5432
      POSTGRESQL_VOLUME_DIR:   /data/
      PGDATA:                  /data/pgdata
      POSTGRES_USER:           keycloak
      POSTGRES_PASSWORD:       <set to the key 'postgresql-password' in secret 'keycloak-postgresql'>  Optional: false
      POSTGRES_DB:             keycloak
      POSTGRESQL_ENABLE_LDAP:  no
      POSTGRESQL_ENABLE_TLS:   no
    Mounts:
      /data/ from data (rw)
      /dev/shm from dshm (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7nd68 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  dshm:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  1Gi
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  keycloak-ssd
    ReadOnly:   false
  default-token-7nd68:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7nd68
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From                   Message
  ----     ------     ----               ----                   -------
  Normal   Scheduled  <unknown>          default-scheduler      Successfully assigned security/keycloak-postgresql-0 to k8s-slave-03
  Normal   Pulling    89s                kubelet, k8s-slave-03  Pulling image "docker.io/postgres:9.6.19"
  Normal   Pulled     58s                kubelet, k8s-slave-03  Successfully pulled image "docker.io/postgres:9.6.19"
  Normal   Created    34s (x3 over 53s)  kubelet, k8s-slave-03  Created container keycloak-postgresql
  Normal   Started    33s (x3 over 53s)  kubelet, k8s-slave-03  Started container keycloak-postgresql
  Warning  BackOff    15s (x6 over 52s)  kubelet, k8s-slave-03  Back-off restarting failed container
  Normal   Pulled     0s (x3 over 53s)   kubelet, k8s-slave-03  Container image "docker.io/postgres:9.6.19" already present on machine

Pod Logs

$ kubectl logs keycloak-postgresql-0 -n security
mkdir: cannot create directory ‘/data/pgdata’: Permission denied
dani8art commented 4 years ago

Hi @voipas thanks for opening this issue.

bitnami/postgresql is a non-root image, so it needs some adjustments to be able to write to your volume: the volume owner is pi:pi, while the container expects at least pi:root. We include an init container in our chart to avoid this kind of error, but you need to enable it. Could you please add the following to your values.yaml?

postgresql:
  volumePermissions:
    enabled: true
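
The same toggle can also be passed on the helm command line instead of editing values.yaml (a sketch, assuming the install command from the report):

helm install keycloak codecentric/keycloak \
  --values keycloak.values.yml \
  --set postgresql.volumePermissions.enabled=true \
  --namespace security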
voipas commented 4 years ago

Hi @dani8art, thanks for the response. I did this, but I still have problems...

values.yaml

postgresql:
  # If `true`, the Postgresql dependency is enabled
  enabled: true
  postgresqlDataDir: /data/pgdata
  # PostgreSQL User to create
  postgresqlUsername: keycloak
  # PostgreSQL Password for the new user
  postgresqlPassword: keycloak
  # PostgreSQL Database to create
  postgresqlDatabase: keycloak
  image:
      registry: docker.io
      repository: "postgres"
      tag: "9.6.19"
  pullPolicy: IfNotPresent
  # Persistent Volume Storage configuration
  persistence:
      enabled: true # Change to true
      mountPath: /data/
      existingClaim: "keycloak-ssd" # Persistent Volume Claim created earlier
      #accessMode: ReadWriteOnce
      #size: "2Gi"
  volumePermissions:
        enabled: true

Pod status

$ kubectl get pods -n security
NAME                    READY   STATUS                  RESTARTS   AGE
keycloak-0              0/1     Init:0/1                0          45s
keycloak-postgresql-0   0/1     Init:CrashLoopBackOff   2          45s

Pod logs

$ kubectl logs keycloak-postgresql-0 -n security
Error from server (BadRequest): container "keycloak-postgresql" in pod "keycloak-postgresql-0" is waiting to start: PodInitializing

Pod describe

 $ kubectl describe pod keycloak-postgresql-0 -n security
Name:         keycloak-postgresql-0
Namespace:    security
Priority:     0
Node:         k8s-slave-03/192.168.0.23
Start Time:   Thu, 20 Aug 2020 17:24:41 +0000
Labels:       app.kubernetes.io/instance=keycloak
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=postgresql
              controller-revision-hash=keycloak-postgresql-5b74cf8d59
              helm.sh/chart=postgresql-9.1.1
              role=master
              statefulset.kubernetes.io/pod-name=keycloak-postgresql-0
Annotations:  <none>
Status:       Pending
IP:           10.42.2.7
IPs:
  IP:           10.42.2.7
Controlled By:  StatefulSet/keycloak-postgresql
Init Containers:
  init-chmod-data:
    Container ID:  containerd://20d164d0f310d61640978b144a7442eeb7c1abd533f9c73ec196bac4ecf01823
    Image:         docker.io/bitnami/minideb:buster
    Image ID:      docker.io/bitnami/minideb@sha256:8a773f4021425654cbb6e31176098632370d1c7eac221cef643476e10d5a3af2
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -cx
      mkdir -p /data//data
      chmod 700 /data//data
      find /data/ -mindepth 1 -maxdepth 1 -not -name "conf" -not -name ".snapshot" -not -name "lost+found" | \
        xargs chown -R 1001:1001
      chmod -R 777 /dev/shm

    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 20 Aug 2020 17:27:42 +0000
      Finished:     Thu, 20 Aug 2020 17:27:42 +0000
    Ready:          False
    Restart Count:  5
    Requests:
      cpu:        250m
      memory:     256Mi
    Environment:  <none>
    Mounts:
      /data/ from data (rw)
      /dev/shm from dshm (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7nd68 (ro)
Containers:
  keycloak-postgresql:
    Container ID:
    Image:          docker.io/postgres:9.6.19
    Image ID:
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      250m
      memory:   256Mi
    Liveness:   exec [/bin/sh -c exec pg_isready -U "keycloak" -d "dbname=keycloak" -h 127.0.0.1 -p 5432] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/sh -c -e exec pg_isready -U "keycloak" -d "dbname=keycloak" -h 127.0.0.1 -p 5432
] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:           false
      POSTGRESQL_PORT_NUMBER:  5432
      POSTGRESQL_VOLUME_DIR:   /data/
      PGDATA:                  /data/pgdata
      POSTGRES_USER:           keycloak
      POSTGRES_PASSWORD:       <set to the key 'postgresql-password' in secret 'keycloak-postgresql'>  Optional: false
      POSTGRES_DB:             keycloak
      POSTGRESQL_ENABLE_LDAP:  no
      POSTGRESQL_ENABLE_TLS:   no
    Mounts:
      /data/ from data (rw)
      /dev/shm from dshm (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7nd68 (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  dshm:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  1Gi
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  keycloak-ssd
    ReadOnly:   false
  default-token-7nd68:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7nd68
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From                   Message
  ----     ------     ----                   ----                   -------
  Normal   Scheduled  <unknown>              default-scheduler      Successfully assigned security/keycloak-postgresql-0 to k8s-slave-03
  Normal   Created    3m49s (x4 over 4m36s)  kubelet, k8s-slave-03  Created container init-chmod-data
  Normal   Started    3m49s (x4 over 4m35s)  kubelet, k8s-slave-03  Started container init-chmod-data
  Warning  BackOff    3m21s (x7 over 4m33s)  kubelet, k8s-slave-03  Back-off restarting failed container
  Normal   Pulling    3m6s (x5 over 4m37s)   kubelet, k8s-slave-03  Pulling image "docker.io/bitnami/minideb:buster"
  Normal   Pulled     3m4s (x5 over 4m36s)   kubelet, k8s-slave-03  Successfully pulled image "docker.io/bitnami/minideb:buster"
dani8art commented 4 years ago

It seems like init-chmod-data is not working properly. Could you add its logs, please?

$ kubectl logs keycloak-postgresql-0 init-chmod-data
voipas commented 4 years ago

Hey, here is the outcome:

 $ kubectl logs keycloak-postgresql-0 init-chmod-data -n security
standard_init_linux.go:211: exec user process caused "exec format error"
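
An "exec format error" at exec time usually means the image was built for a CPU architecture other than the node's (linux/arm here). One way to check which platforms a tag was published for (a sketch; docker manifest inspect requires a Docker CLI with manifest support):

docker manifest inspect docker.io/bitnami/minideb:buster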
dani8art commented 4 years ago

Try removing the mountPath, or the trailing / at the end:

  persistence:
      enabled: true # Change to true
      mountPath: /data/
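
In other words, something like this (a sketch; if mountPath is omitted the chart falls back to its default data mount, and postgresqlDataDir must then point inside it):

persistence:
  enabled: true
  existingClaim: "keycloak-ssd"
  # either omit mountPath entirely, or drop the trailing slash:
  # mountPath: /data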
voipas commented 4 years ago

Sorry for the late response. I still have the same issues and the same errors. To double-check: on the master server I'm using an NFS mount point, and as shown in my previous messages I have created the PV and PVC:

hostPath:
    path: "/mnt/ssd/keycloak-ps"

So when I try to install PostgreSQL, which mount path should I use? I tried mountPath=/mnt/ssd/keycloak-ps:

Name:         postgresql-postgresql-0
Namespace:    security
Priority:     0
Node:         k8s-slave-02/192.168.0.22
Start Time:   Thu, 27 Aug 2020 02:52:34 +0000
Labels:       app.kubernetes.io/instance=postgresql
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=postgresql
              controller-revision-hash=postgresql-postgresql-d64999946
              helm.sh/chart=postgresql-9.3.2
              role=master
              statefulset.kubernetes.io/pod-name=postgresql-postgresql-0
Annotations:  <none>
Status:       Pending
IP:           10.42.3.9
IPs:
  IP:           10.42.3.9
Controlled By:  StatefulSet/postgresql-postgresql
Init Containers:
  init-chmod-data:
    Container ID:  containerd://148c5695f56419fce24a7cfc9eb236531b7e9cf331339cbf898c8c2071420c1a
    Image:         docker.io/bitnami/minideb:buster
    Image ID:      docker.io/bitnami/minideb@sha256:8a773f4021425654cbb6e31176098632370d1c7eac221cef643476e10d5a3af2
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -cx
      mkdir -p /mnt/ssd/keycloak-ps/data
      chmod 700 /mnt/ssd/keycloak-ps/data
      find /mnt/ssd/keycloak-ps -mindepth 1 -maxdepth 1 -not -name "conf" -not -name ".snapshot" -not -name "lost+found" | \
        xargs chown -R 1001:1001
      chmod -R 777 /dev/shm

    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 27 Aug 2020 02:52:39 +0000
      Finished:     Thu, 27 Aug 2020 02:52:39 +0000
    Ready:          False
    Restart Count:  1
    Requests:
      cpu:        250m
      memory:     256Mi
    Environment:  <none>
    Mounts:
      /dev/shm from dshm (rw)
      /mnt/ssd/keycloak-ps from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-j9xws (ro)
Containers:
  postgresql:
    Container ID:
    Image:          docker.io/postgres:9.6.19
    Image ID:
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      250m
      memory:   256Mi
    Liveness:   exec [/bin/sh -c exec pg_isready -U "keycloak" -d "dbname=keycloak" -h 127.0.0.1 -p 5432] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/sh -c -e exec pg_isready -U "keycloak" -d "dbname=keycloak" -h 127.0.0.1 -p 5432
] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:           true
      POSTGRESQL_PORT_NUMBER:  5432
      POSTGRESQL_VOLUME_DIR:   /mnt/ssd/keycloak-ps
      PGDATA:                  /data/pgdata
      POSTGRES_USER:           keycloak
      POSTGRES_PASSWORD:       <set to the key 'postgresql-password' in secret 'postgresql'>  Optional: false
      POSTGRES_DB:             keycloak
      POSTGRESQL_ENABLE_LDAP:  no
      POSTGRESQL_ENABLE_TLS:   no
    Mounts:
      /dev/shm from dshm (rw)
      /mnt/ssd/keycloak-ps from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-j9xws (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  dshm:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  1Gi
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  keycloak-ssd
    ReadOnly:   false
  default-token-j9xws:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-j9xws
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From                   Message
  ----     ------     ----               ----                   -------
  Normal   Scheduled  <unknown>          default-scheduler      Successfully assigned security/postgresql-postgresql-0 to k8s-slave-02
  Warning  BackOff    15s (x2 over 16s)  kubelet, k8s-slave-02  Back-off restarting failed container
  Normal   Pulling    3s (x3 over 21s)   kubelet, k8s-slave-02  Pulling image "docker.io/bitnami/minideb:buster"
  Normal   Pulled     1s (x3 over 19s)   kubelet, k8s-slave-02  Successfully pulled image "docker.io/bitnami/minideb:buster"
  Normal   Created    1s (x3 over 19s)   kubelet, k8s-slave-02  Created container init-chmod-data
  Normal   Started    1s (x3 over 19s)   kubelet, k8s-slave-02  Started container init-chmod-data
voipas commented 4 years ago

OK, I found one of the problems: as I'm using a Raspberry Pi, I had the wrong minideb image (built for the wrong architecture), so now my command looks like this (note: now I'm playing only with postgresql):

helm install postgresql \
--set image.registry=docker.io,\
image.repository=postgres,\
image.tag="9.6.19",\
postgresqlDatabase=keycloak,\
postgresqlUsername=keycloak,\
postgresqlPassword=keycloak,\
persistence.enabled=true,\
persistence.existingClaim=keycloak-ssd,\
persistence.mountPath=/mnt/ssd/keycloak-ps,\
postgresqlDataDir=/data/pgdata,\
persistence.accessModes=ReadWriteOnce,\
persistence.size="2Gi",\
volumePermissions.enabled=true,\
volumePermissions.image.repository=yeoncomi/minideb-armv7l,\
volumePermissions.image.tag="latest",\
securityContext.fsGroup=1000,\
securityContext.runAsUser=1000 \
bitnami/postgresql \
--namespace security

General logs

keycloak $ kubectl logs postgresql-postgresql-0 -n security
mkdir: cannot create directory ‘/data’: Permission denied

Init Chmod data logs

keycloak $ kubectl logs postgresql-postgresql-0 init-chmod-data -n security
+ mkdir -p /mnt/ssd/keycloak-ps/data
+ chmod 700 /mnt/ssd/keycloak-ps/data
+ find /mnt/ssd/keycloak-ps -mindepth 1 -maxdepth 1 -not -name conf -not -name .snapshot -not -name lost+found
+ xargs chown -R 1000:1000
+ chmod -R 777 /dev/shm
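
Note the mismatch here: the init container now completes, but PGDATA (/data/pgdata) does not live under the volume mount (/mnt/ssd/keycloak-ps), so PostgreSQL tries to create /data on the container's root filesystem, where the non-root user cannot write. The data directory has to sit inside the mounted path, e.g. (a sketch of the consistent pairing, matching the fix in the next comment):

persistence:
  mountPath: /data
postgresqlDataDir: /data/pgdata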

Pod Describe

keycloak $ kubectl describe pod postgresql-postgresql-0 -n security
Name:         postgresql-postgresql-0
Namespace:    security
Priority:     0
Node:         k8s-slave-02/192.168.0.22
Start Time:   Thu, 27 Aug 2020 03:25:54 +0000
Labels:       app.kubernetes.io/instance=postgresql
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=postgresql
              controller-revision-hash=postgresql-postgresql-69bb5b6484
              helm.sh/chart=postgresql-9.3.2
              role=master
              statefulset.kubernetes.io/pod-name=postgresql-postgresql-0
Annotations:  <none>
Status:       Running
IP:           10.42.3.12
IPs:
  IP:           10.42.3.12
Controlled By:  StatefulSet/postgresql-postgresql
Init Containers:
  init-chmod-data:
    Container ID:  containerd://bb16f3d03840c33a3d73a6386ce2964cc1cbf3c053e2842cb5249c1551c165c4
    Image:         docker.io/yeoncomi/minideb-armv7l:latest
    Image ID:      docker.io/yeoncomi/minideb-armv7l@sha256:1d346e37ca721958c44ec7557b16e7fa0554003a4dfd7659c8f642728ae895c3
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -cx
      mkdir -p /mnt/ssd/keycloak-ps/data
      chmod 700 /mnt/ssd/keycloak-ps/data
      find /mnt/ssd/keycloak-ps -mindepth 1 -maxdepth 1 -not -name "conf" -not -name ".snapshot" -not -name "lost+found" | \
        xargs chown -R 1000:1000
      chmod -R 777 /dev/shm

    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 27 Aug 2020 03:26:53 +0000
      Finished:     Thu, 27 Aug 2020 03:26:53 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        250m
      memory:     256Mi
    Environment:  <none>
    Mounts:
      /dev/shm from dshm (rw)
      /mnt/ssd/keycloak-ps from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-j9xws (ro)
Containers:
  postgresql:
    Container ID:   containerd://cb141ade3b73985dfd570ef10ac1d786a3379e9faa79fe7de4c806be99504da6
    Image:          docker.io/postgres:9.6.19
    Image ID:       docker.io/library/postgres@sha256:9aa0b86ae3be8de6f922441b913e8914e840c652b6880a642f42f98f5e2aaeaf
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 27 Aug 2020 03:33:31 +0000
      Finished:     Thu, 27 Aug 2020 03:33:31 +0000
    Ready:          False
    Restart Count:  6
    Requests:
      cpu:      250m
      memory:   256Mi
    Liveness:   exec [/bin/sh -c exec pg_isready -U "keycloak" -d "dbname=keycloak" -h 127.0.0.1 -p 5432] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/sh -c -e exec pg_isready -U "keycloak" -d "dbname=keycloak" -h 127.0.0.1 -p 5432
] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:           false
      POSTGRESQL_PORT_NUMBER:  5432
      POSTGRESQL_VOLUME_DIR:   /mnt/ssd/keycloak-ps
      PGDATA:                  /data/pgdata
      POSTGRES_USER:           keycloak
      POSTGRES_PASSWORD:       <set to the key 'postgresql-password' in secret 'postgresql'>  Optional: false
      POSTGRES_DB:             keycloak
      POSTGRESQL_ENABLE_LDAP:  no
      POSTGRESQL_ENABLE_TLS:   no
    Mounts:
      /dev/shm from dshm (rw)
      /mnt/ssd/keycloak-ps from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-j9xws (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  dshm:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  1Gi
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  keycloak-ssd
    ReadOnly:   false
  default-token-j9xws:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-j9xws
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                     From                   Message
  ----     ------     ----                    ----                   -------
  Normal   Scheduled  <unknown>               default-scheduler      Successfully assigned security/postgresql-postgresql-0 to k8s-slave-02
  Normal   Pulling    7m53s                   kubelet, k8s-slave-02  Pulling image "docker.io/yeoncomi/minideb-armv7l:latest"
  Normal   Pulled     7m                      kubelet, k8s-slave-02  Successfully pulled image "docker.io/yeoncomi/minideb-armv7l:latest"
  Normal   Created    6m56s                   kubelet, k8s-slave-02  Created container init-chmod-data
  Normal   Started    6m56s                   kubelet, k8s-slave-02  Started container init-chmod-data
  Normal   Pulling    6m55s                   kubelet, k8s-slave-02  Pulling image "docker.io/postgres:9.6.19"
  Normal   Pulled     6m24s                   kubelet, k8s-slave-02  Successfully pulled image "docker.io/postgres:9.6.19"
  Normal   Pulled     5m23s (x3 over 6m20s)   kubelet, k8s-slave-02  Container image "docker.io/postgres:9.6.19" already present on machine
  Normal   Created    5m23s (x4 over 6m20s)   kubelet, k8s-slave-02  Created container postgresql
  Normal   Started    5m22s (x4 over 6m20s)   kubelet, k8s-slave-02  Started container postgresql
  Warning  BackOff    2m49s (x20 over 6m19s)  kubelet, k8s-slave-02  Back-off restarting failed container
voipas commented 4 years ago

OK, I solved the problem. The deployment looks like this:

helm install postgresql \
--set image.registry=docker.io,\
image.repository=postgres,\
image.tag="9.6.19",\
postgresqlDatabase=keycloak,\
postgresqlUsername=keycloak,\
postgresqlPassword=keycloak,\
persistence.enabled=true,\
persistence.existingClaim=keycloak-ssd,\
persistence.mountPath=/data,\
postgresqlDataDir=/data/pgdata,\
persistence.accessModes=ReadWriteOnce,\
persistence.size="2Gi",\
volumePermissions.enabled=true,\
volumePermissions.image.repository=yeoncomi/minideb-armv7l,\
volumePermissions.image.tag="latest",\
securityContext.fsGroup=1000,\
securityContext.runAsUser=1000 \
bitnami/postgresql \
--namespace security
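
This works because the volume is now mounted at /data, so PGDATA (/data/pgdata) sits inside it and is covered by the init container's chown. For reference, the same configuration expressed as a values file (a sketch, equivalent to the --set flags above):

image:
  registry: docker.io
  repository: postgres
  tag: "9.6.19"
postgresqlDatabase: keycloak
postgresqlUsername: keycloak
postgresqlPassword: keycloak
postgresqlDataDir: /data/pgdata
persistence:
  enabled: true
  existingClaim: keycloak-ssd
  mountPath: /data
  accessModes:
    - ReadWriteOnce
  size: "2Gi"
volumePermissions:
  enabled: true
  image:
    repository: yeoncomi/minideb-armv7l
    tag: latest
securityContext:
  fsGroup: 1000
  runAsUser: 1000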
javsalgar commented 4 years ago

Hi,

Good to know that it was solved. If you come across other issues, do not hesitate to open a new ticket :)

stale[bot] commented 4 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

stale[bot] commented 4 years ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.