Closed: djjudas21 closed this issue 3 years ago.
Hi,
Could you share the values that you changed from the default?
Sure @javsalgar. I probably should have added this to begin with. I've redacted a couple of sensitive values but everything else is the same.
hostAliases:
  ## Necessary for apache-exporter to work
  ##
  - ip: "127.0.0.1"
    hostnames:
      - "status.localhost"
## @param replicaCount Number of replicas (requires ReadWriteMany PVC support)
##
replicaCount: 1
## @param owncloudSkipInstall Skip ownCloud installation wizard. Useful for migrations and restoring from SQL dump
## ref: https://github.com/bitnami/bitnami-docker-owncloud#configuration
##
owncloudSkipInstall: true
## @param owncloudHost ownCloud host to create application URLs (when ingress, it will be ignored)
## ref: https://github.com/bitnami/bitnami-docker-owncloud#configuration
##
owncloudHost: ""
## @param owncloudUsername User of the application
## ref: https://github.com/bitnami/bitnami-docker-owncloud#configuration
##
owncloudUsername: user
## @param owncloudPassword Application password
## Defaults to a random 10-character alphanumeric string if not set
## ref: https://github.com/bitnami/bitnami-docker-owncloud#configuration
##
owncloudPassword: "password"
## @param owncloudEmail Admin email
## ref: https://github.com/bitnami/bitnami-docker-owncloud#configuration
##
owncloudEmail: me@example.com
updateStrategy:
  type: RollingUpdate
## @param extraEnvVars An array to add extra env vars
## For example:
##  - name: BEARER_AUTH
##    value: true
##
extraEnvVars: []
tolerations: []
## @param existingSecret Name of a secret with the application password
##
existingSecret: ""
## SMTP mail delivery configuration
## ref: https://github.com/bitnami/bitnami-docker-owncloud/#smtp-configuration
## @param smtpHost SMTP host
## @param smtpPort SMTP port
## @param smtpUser SMTP user
## @param smtpPassword SMTP password
## @param smtpProtocol SMTP Protocol (options: ssl, tls, nil)
##
smtpHost: ""
smtpPort: ""
smtpUser: ""
smtpPassword: ""
smtpProtocol: ""
## @param containerPorts.http Sets HTTP port inside NGINX container
## @param containerPorts.https Sets HTTPS port inside NGINX container
##
containerPorts:
  http: 8080
  https: 8443
## @param sessionAffinity Control where client requests go, to the same pod or round-robin
## Values: ClientIP or None
## ref: https://kubernetes.io/docs/user-guide/services/
##
sessionAffinity: "None"
## @param podAffinityPreset Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAffinityPreset: ""
## @param podAntiAffinityPreset Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAntiAffinityPreset: soft
## Node affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
## Allowed values: soft, hard
##
nodeAffinityPreset:
  ## @param nodeAffinityPreset.type Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
  ##
  type: ""
  ## @param nodeAffinityPreset.key Node label key to match. Ignored if `affinity` is set.
  ## E.g.
  ## key: "kubernetes.io/e2e-az-name"
  ##
  key: ""
  ## @param nodeAffinityPreset.values Node label values to match. Ignored if `affinity` is set.
  ## E.g.
  ## values:
  ##   - e2e-az1
  ##   - e2e-az2
  ##
  values: []
## @param affinity Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set
##
affinity: {}
## @param nodeSelector Node labels for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## @param resources Metrics exporter resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
## e.g:
## requests:
##   memory: 512Mi
##   cpu: 300m
##
resources: {}
## Configure Pods Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param podSecurityContext.enabled Enable ownCloud pods' Security Context
## @param podSecurityContext.fsGroup ownCloud pods' group ID
##
podSecurityContext:
  enabled: true
  fsGroup: 1001
## Configure Container Security Context (only main container)
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## @param containerSecurityContext.enabled Enable ownCloud containers' Security Context
## @param containerSecurityContext.runAsUser ownCloud containers' Security Context
##
containerSecurityContext:
  enabled: true
  runAsUser: 1001
startupProbe:
  enabled: true
## @param podAnnotations Pod annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## @param podLabels Pod extra labels
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## @section Database parameters
## MariaDB chart configuration
## https://github.com/bitnami/charts/blob/master/bitnami/mariadb/values.yaml
##
mariadb:
  ## @param mariadb.enabled Whether to deploy a mariadb server to satisfy the application's database requirements
  ## To use an external database set this to false and configure the externalDatabase parameters
  ##
  enabled: true
  ## @param mariadb.architecture MariaDB architecture. Allowed values: `standalone` or `replication`
  ##
  architecture: standalone
  ## MariaDB Authentication parameters
  ##
  auth:
    ## @param mariadb.auth.rootPassword Password for the MariaDB `root` user
    ## ref: https://github.com/bitnami/bitnami-docker-mariadb#setting-the-root-password-on-first-run
    ##
    rootPassword: "owncloud"
    ## @param mariadb.auth.database Database name to create
    ## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#creating-a-database-on-first-run
    ##
    database: owncloud
    ## @param mariadb.auth.username Database user to create
    ## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#creating-a-database-user-on-first-run
    ##
    username: owncloud
    ## @param mariadb.auth.password Password for the database
    ##
    password: "owncloud"
  primary:
    ## Enable persistence using Persistent Volume Claims
    ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
    ##
    persistence:
      enabled: true
      storageClass: "freenas-iscsi-csi-ssd"
      accessModes:
        - ReadWriteOnce
      size: 8Gi
      existingClaim: ""
## @section Persistence parameters
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  ## @param persistence.enabled Enable persistence using PVC
  ##
  enabled: true
  ## @param persistence.storageClass PVC Storage Class for ownCloud volume
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  storageClass: "freenas-nfs-csi"
  ## @param persistence.accessMode PVC Access Mode for ownCloud volume
  ##
  accessMode: ReadWriteMany
  ## @param persistence.size PVC Storage Request for ownCloud volume
  ##
  size: 8Gi
  ## @param persistence.existingClaim An Existing PVC name for ownCloud volume
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  ##
  existingClaim: ""
## @section Volume Permissions parameters
## Init containers parameters:
## volumePermissions: Change the owner and group of the persistent volume mountpoint to runAsUser:fsGroup values from the securityContext section.
##
volumePermissions:
  ## @param volumePermissions.enabled Enable init container that changes volume permissions in the data directory (for cases where the default k8s `runAsUser` and `fsUser` values do not work)
  ##
  enabled: false
  ## Init containers' resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ## We usually recommend not to specify default resources and to leave this as a conscious
  ## choice for the user. This also increases chances charts run on environments with little
  ## resources, such as Minikube. If you do want to specify resources, uncomment the following
  ## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  ## @param volumePermissions.resources.limits The resources limits for the container
  ## @param volumePermissions.resources.requests The requested resources for the container
  ##
  resources:
    ## Example:
    ## limits:
    ##   cpu: 100m
    ##   memory: 128Mi
    limits: {}
    ## Example:
    ## requests:
    ##   cpu: 100m
    ##   memory: 128Mi
    requests: {}
## @section Traffic Exposure Parameters
## Kubernetes configuration
##
service:
  ## @param service.type Kubernetes Service type
  ##
  type: ClusterIP
  ## @param service.port Service HTTP port
  ##
  port: 8080
  ## @param service.httpsPort Service HTTPS port
  ##
  httpsPort: 8443
  ##
  ## @param service.externalTrafficPolicy Enable client source IP preservation
  ## ref http://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
  ##
  externalTrafficPolicy: Cluster
## Configure the ingress resource that allows you to access the
## ownCloud installation. Set up the URL
## ref: http://kubernetes.io/docs/user-guide/ingress/
##
ingress:
  ## @param ingress.enabled Set to true to enable ingress record generation
  ##
  enabled: true
  ## @param ingress.certManager Set this to true in order to add the corresponding annotations for cert-manager
  ##
  certManager: true
  ## @param ingress.hostname Default host for the ingress resource
  ##
  hostname: oc.example.com
  ## @param ingress.pathType Ingress path type
  ##
  pathType: ImplementationSpecific
  ## @param ingress.annotations Ingress annotations
  ## For a full list of possible ingress annotations, please see
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
  ##
  ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
  ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
  ## e.g:
  ## kubernetes.io/ingress.class: nginx
  ##
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  ## @param ingress.tls Enable TLS configuration for the hostname defined at ingress.hostname parameter
  ## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }}
  ## You can use the ingress.secrets parameter to create this TLS secret, rely on cert-manager to create it, or
  ## let the chart create self-signed certificates for you
  ##
  tls: true
  ## @param ingress.extraHosts The list of additional hostnames to be covered with this ingress record.
  ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
  ## Example:
  ## extraHosts:
  ## - name: owncloud.local
  ##   path: /
  ##
  extraHosts: []
  ## @param ingress.extraTls The tls configuration for additional hostnames to be covered with this ingress record.
  ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
  ## Example:
  ## extraTls:
  ## - hosts:
  ##     - owncloud.local
  ##   secretName: owncloud.local-tls
  ##
  extraTls: []
  ## @param ingress.secrets If you're providing your own certificates, please use this to add the certificates as secrets
  ## key and certificate should start with -----BEGIN CERTIFICATE----- or -----BEGIN RSA PRIVATE KEY-----
  ## name should line up with a secretName set further up
  ##
  ## If it is not set and you're using cert-manager, this is unneeded, as it will create the secret for you
  ## If it is not set and you're NOT using cert-manager either, self-signed certificates will be created
  ## It is also possible to create and manage the certificates outside of this helm chart
  ## Please see README.md for more information
  ## e.g:
  ## - name: owncloud.local-tls
  ##   key:
  ##   certificate:
  ##
  secrets: []
## @section Metrics parameters
## Prometheus Exporter / Metrics
##
metrics:
  enabled: false
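(A values file like the one above can be sanity-checked before installing by rendering the chart locally; the release name here is arbitrary, and this only validates that the values parse and the templates render:)

helm template owncloud2 bitnami/owncloud --values values.yaml > /dev/null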
Hi,
It seems that you want to restore an existing installation. In order to confirm that the issue lies in the owncloudSkipInstall value, could you try setting it to false?
owncloudSkipInstall: false
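For instance, this can be layered on top of the existing values file without editing it (release and namespace names are assumed from the commands later in this thread):

helm upgrade --install -n owncloud2 owncloud2 bitnami/owncloud --values values.yaml --set owncloudSkipInstall=false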
Yes, I already tried installing with owncloudSkipInstall set to both false and true. I got the same error both ways.
Could you also check that there are no leftovers from previous helm installations? It is strange that no config.php is present. Could you try installing with the default parameters, just with helm install testowncloud bitnami/owncloud?
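For example, something like this should reveal any leftover releases or persistent volume claims:

helm list --all-namespaces
kubectl get pvc --all-namespaces | grep -i owncloud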
OK, I tried with default parameters and it works fine, so the problem must be with my values.
[jonathan@latitude ~]$ kubectl create ns testowncloud
[jonathan@latitude ~]$ helm install testowncloud bitnami/owncloud --set owncloudHost=testowncloud.local
[jonathan@latitude ~]$ kubectl get po
NAME READY STATUS RESTARTS AGE
testowncloud-mariadb-0 1/1 Running 0 95s
testowncloud-6f4964f999-79qmn 1/1 Running 0 96s
I'll try tearing down and redeploying. I set owncloudSkipInstall: false because I've got a DB dump I want to restore from.
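A dump like that is typically produced against the old deployment with something along these lines (the hostname here is a placeholder):

mysqldump -h old-owncloud-db.example.com -u owncloud -p owncloud > owncloud-dump.sql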
Complete teardown of all existing resources, including PVCs. Then install using the values.yaml listed above.
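Roughly speaking, the teardown was along these lines (note that helm uninstall leaves PVCs behind, so they have to be deleted explicitly; names match the commands below):

helm uninstall owncloud2 -n owncloud2
kubectl delete pvc --all -n owncloud2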
[jonathan@latitude owncloud2]$ helm upgrade --install --create-namespace -n owncloud2 owncloud2 bitnami/owncloud --values values.yaml
[jonathan@latitude owncloud2]$ kubectl logs owncloud2-58df6f9979-tdj8v -f
owncloud 20:06:06.44
owncloud 20:06:06.45 Welcome to the Bitnami owncloud container
owncloud 20:06:06.45 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-owncloud
owncloud 20:06:06.45 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-owncloud/issues
owncloud 20:06:06.45
owncloud 20:06:06.45 INFO ==> ** Starting ownCloud setup **
owncloud 20:06:06.47 INFO ==> Configuring the HTTP port
owncloud 20:06:06.48 INFO ==> Configuring the HTTPS port
owncloud 20:06:06.50 INFO ==> Configuring PHP options
owncloud 20:06:06.51 INFO ==> Validating settings in MYSQL_CLIENT_* env vars
owncloud 20:06:06.59 INFO ==> Trying to connect to the database server
owncloud 20:06:56.70 INFO ==> Ensuring ownCloud directories exist
owncloud 20:06:56.71 INFO ==> An already initialized ownCloud database was provided, configuration will be skipped
owncloud 20:06:56.73 INFO ==> Running installation script to create configuration (using local SQLite database)
owncloud 20:07:58.79 INFO ==> Updating configuration file with values provided via environment variables
[jonathan@latitude owncloud2]$ kubectl logs -f owncloud2-58df6f9979-tdj8v
owncloud 20:08:01.12
owncloud 20:08:01.13 Welcome to the Bitnami owncloud container
owncloud 20:08:01.13 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-owncloud
owncloud 20:08:01.13 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-owncloud/issues
owncloud 20:08:01.13
owncloud 20:08:01.13 INFO ==> ** Starting ownCloud setup **
owncloud 20:08:01.15 INFO ==> Configuring the HTTP port
owncloud 20:08:01.16 INFO ==> Configuring the HTTPS port
owncloud 20:08:01.18 INFO ==> Configuring PHP options
owncloud 20:08:01.19 INFO ==> Validating settings in MYSQL_CLIENT_* env vars
owncloud 20:08:01.29 INFO ==> Restoring persisted ownCloud installation
realpath: /bitnami/owncloud/config/config.php: No such file or directory
[jonathan@latitude owncloud2]$ kubectl get po
NAME READY STATUS RESTARTS AGE
owncloud2-mariadb-0 1/1 Running 0 4m40s
owncloud2-58df6f9979-tdj8v 0/1 CrashLoopBackOff 4 4m40s
It's like it never completes installation on the first run, gets restarted, and from that point on it can't find the config.php.
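As an aside, after a restart the crashed container's earlier output can also be retrieved without racing the restart:

kubectl logs --previous owncloud2-58df6f9979-tdj8v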
Just a bit more info on this. I've done a clean installation of the helm chart from my values.yaml. I follow the logs when it first starts up:
[jonathan@poseidon owncloud2]$ kubectl logs -f owncloud2-67d547d49-79qbg owncloud
owncloud 16:04:21.03
owncloud 16:04:21.04 Welcome to the Bitnami owncloud container
owncloud 16:04:21.04 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-owncloud
owncloud 16:04:21.04 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-owncloud/issues
owncloud 16:04:21.05
owncloud 16:04:21.05 INFO ==> ** Starting ownCloud setup **
owncloud 16:04:21.13 INFO ==> Configuring the HTTP port
owncloud 16:04:21.16 INFO ==> Configuring the HTTPS port
owncloud 16:04:21.23 INFO ==> Configuring PHP options
owncloud 16:04:21.26 INFO ==> Validating settings in MYSQL_CLIENT_* env vars
owncloud 16:04:21.59 INFO ==> Trying to connect to the database server
owncloud 16:05:03.22 INFO ==> Ensuring ownCloud directories exist
owncloud 16:05:03.45 INFO ==> An already initialized ownCloud database was provided, configuration will be skipped
owncloud 16:05:03.46 INFO ==> Running installation script to create configuration (using local SQLite database)
owncloud 16:06:48.41 INFO ==> Updating configuration file with values provided via environment variables
Here the logs -f dies because the pod gets restarted, so I resume it once the pod has restarted:
[jonathan@poseidon owncloud2]$ kubectl logs -f owncloud2-67d547d49-79qbg owncloud
owncloud 16:06:53.21
owncloud 16:06:53.26 Welcome to the Bitnami owncloud container
owncloud 16:06:53.27 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-owncloud
owncloud 16:06:53.28 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-owncloud/issues
owncloud 16:06:53.28
owncloud 16:06:53.28 INFO ==> ** Starting ownCloud setup **
owncloud 16:06:53.41 INFO ==> Configuring the HTTP port
owncloud 16:06:53.45 INFO ==> Configuring the HTTPS port
owncloud 16:06:53.50 INFO ==> Configuring PHP options
owncloud 16:06:53.53 INFO ==> Validating settings in MYSQL_CLIENT_* env vars
owncloud 16:06:53.78 INFO ==> Restoring persisted ownCloud installation
realpath: /bitnami/owncloud/config/config.php: No such file or directory
I have looked inside the volume and indeed there is no config.php; in fact there are no files or directories at all. There is not a general problem with the storage: I've got loads of other applications provisioning and using volumes on this cluster.
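For reference, while the container is briefly up, the mount can be inspected with something like this (pod and container names as in the logs commands above):

kubectl exec -it owncloud2-67d547d49-79qbg -c owncloud -- ls -la /bitnami/owncloud/config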
The restart always seems to happen when the pod is around 2m40s old. I added a long startupProbe and disabled the livenessProbe in case it was timing out, but I don't think this is the cause.
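Such overrides look roughly like this in values.yaml; the field names follow the usual Bitnami probe conventions and the numbers are illustrative:

startupProbe:
  enabled: true
  initialDelaySeconds: 120
  periodSeconds: 10
  failureThreshold: 60
livenessProbe:
  enabled: false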
I assume the problem is with the scripts that generate and save the ownCloud config on first run. I don't know why it mentions SQLite.
Hi,
This is the logic it is following:
info "An already initialized ownCloud database was provided, configuration will be skipped"
# ownCloud does not have any support for providing any existing database
# However, it does support SQLite as database, which is enabled by default in our ownCloud images
# Therefore we will install ownCloud with SQLite, then manually change the configuration to use the appropriate DB
info "Running installation script to create configuration (using local SQLite database)"
local data_dir
# If the data directory was not provided / is empty, populate it as if it were a new installation
if is_mounted_dir_empty "$OWNCLOUD_DATA_DIR"; then
    data_dir="$OWNCLOUD_DATA_DIR"
else
    data_dir="$(mktemp -d)"
    # When running mktemp as 'root' it sets 700 permissions, we need more permissions
    am_i_root && configure_permissions_ownership "$data_dir" -d "770" -u "$WEB_SERVER_DAEMON_USER" -g "root"
fi
owncloud_execute_occ maintenance:install "${owncloud_cli_args[@]}" --database sqlite --data-dir "$data_dir"
# Update configuration file
# These differences can be generated manually by installing with SQLite and comparing configuration files
info "Updating configuration file with values provided via environment variables"
owncloud_conf_set "mysql.utf8mb4" "true" "boolean"
owncloud_conf_set "dbhost" "${OWNCLOUD_DATABASE_HOST}:${OWNCLOUD_DATABASE_PORT_NUMBER}"
owncloud_conf_set "dbuser" "$OWNCLOUD_DATABASE_USER"
owncloud_conf_set "dbpassword" "$OWNCLOUD_DATABASE_PASSWORD"
# NOTE: These options must be last and in a *very specific order*, or 'occ config:system:set' calls will fail
# - 'dbname' will cause ownCloud not to recognize the SQLite db, failing to set any options
# - same with 'dbtableprefix', but its default value for non-SQLite dbs is 'oc_' (it's a cosmetic change)
# - Due to 'occ' not working after changing the above fields, we must manually set the DB type via a 'sed' substitution
# - 'datadirectory' stores the SQLite database, if it is changed before the DB is configured, 'occ' will fail
owncloud_conf_set "dbname" "$OWNCLOUD_DATABASE_NAME"
replace_in_file "$OWNCLOUD_CONF_FILE" "('dbtype'\s*=>\s*)'[^']*'" "\1'mysql'"
owncloud_conf_set "dbtableprefix" "oc_"
owncloud_conf_set "datadirectory" "$OWNCLOUD_DATA_DIR"
owncloud_upgrade_database_schema
Could you try installing the chart with image.debug=true on a clean installation? It should provide more insight into the issue.
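That is, something like this (names as used earlier in the thread):

helm upgrade --install -n owncloud2 owncloud2 bitnami/owncloud --values values.yaml --set image.debug=true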
Hi again, sorry to go quiet. I believe this issue was down to my own misunderstanding of the owncloudSkipInstall option.
I was setting owncloudSkipInstall: true because I wanted to restore from a database dump from a previous (non-Bitnami) deployment of ownCloud. I believe in my case, for this migration, I actually needed owncloudSkipInstall: false and then to manually import the database dump and files.
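A sketch of that manual import, using the MariaDB pod and credentials from the values above (the dump filename is a placeholder):

kubectl exec -i owncloud2-mariadb-0 -n owncloud2 -- mysql -u owncloud -powncloud owncloud < owncloud-dump.sql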
I did try installing with owncloudSkipInstall: false but still ran into errors. I think this was probably due to a PVC remaining even after I did a helm delete. I wasn't able to reproduce it later on.
So, apologies for a false report and thanks for your help on the issue. Have a good weekend! :+1:
Hi @djjudas21, don't worry. Thanks for coming back and sharing your findings.
Which chart: bitnami/owncloud 10.2.24
Describe the bug: Greenfield installation of ownCloud via the Helm chart. When the ownCloud container starts up, it crashloops due to a bad path:

realpath: /bitnami/owncloud/config/config.php: No such file or directory
To Reproduce: Steps to reproduce the behavior:
helm upgrade --install --create-namespace -n owncloud2 owncloud2 bitnami/owncloud --values values.yaml
Expected behavior The container should start up
Version of Helm and Kubernetes:
helm version:
kubectl version: