CloudVE / galaxy-helm

Minimal setup required to run Galaxy under Kubernetes
MIT License

Config files pushed by chart should be optional (to allow decision on using the ones in the container) #49

Open pcm32 opened 5 years ago

pcm32 commented 5 years ago

I tried using the setup with a container that extends galaxy/galaxy:19.05 and includes all the config files from my previous setup (whitelists, dynamic destinations, etc.). However, even when I don't set configs.<some-config> = file-content, the chart still mounts these files from its ConfigMap, overriding the ones I want to use from the image.

I used the following Helm values file with the chart:

image:
  repository: pcm32/galaxy-helm-v3
  tag: 19.05
  pullPolicy: IfNotPresent

configs:
  galaxy.yml: |
    uwsgi:
      virtualenv: /galaxy/server/.venv
      processes: 1
      http: 0.0.0.0:8080
      static-map: /static/style=/galaxy/server/static/style/blue
      static-map: /static=/galaxy/server/static
      static-map: /favicon.ico=/galaxy/server/static/favicon.ico
      pythonpath: /galaxy/server/lib
      thunder-lock: true
      manage-script-name: true
      mount: {{.Values.ingress.path}}=galaxy.webapps.galaxy.buildapp:uwsgi_app()
      buffer-size: 16384
      offload-threads: 2
      threads: 4
      die-on-term: true
      master: true
      hook-master-start: unix_signal:2 gracefully_kill_them_all
      enable-threads: true
      py-call-osafterfork: true
    galaxy:
      database_connection: 'postgresql://{{.Values.postgresql.galaxyDatabaseUser}}:{{.Values.postgresql.galaxyDatabasePassword}}@{{ template "galaxy-postgresql.fullname" . }}/galaxy'
      integrated_tool_panel_config: "/galaxy/server/config/mutable/integrated_tool_panel.xml"
      containers_resolvers_config_file: "/galaxy/server/config/container_resolvers_conf.xml"
      job_config_file: "/galaxy/server/config/job_conf.xml"
      brand: "scRNA-Seq Tertiary A."
      admin_users: admin@email.co.uk
      allow_user_creation: true
      allow_user_deletion: true
      cleanup_job: always
      enable_beta_mulled_containers: true
      conda_auto_install: false

extraEnv:
  - name: GALAXY_DB_USER_PASSWORD
    valueFrom:
      secretKeyRef:
        name: "{{ .Release.Name }}-galaxy-db-password"
        key: galaxy-db-password
  - name: GALAXY_RUNNERS_K8S_PERSISTENT_VOLUME_CLAIMS
    value: '{{ template "galaxy.pvcname" . }}:{{.Values.persistence.mountPath}}'
  - name: GALAXY_RUNNERS_K8S_NAMESPACE
    value: "{{ .Release.Namespace }}"
  - name: GALAXY_RUNNERS_K8S_RUN_AS_USER_ID
    value: "101"
  - name: GALAXY_RUNNERS_K8S_RUN_AS_GROUP_ID
    value: "101"
  - name: GALAXY_RUNNERS_K8S_SUPPLEMENTAL_GROUP_ID
    value: "101"
  - name: GALAXY_RUNNERS_K8S_FS_GROUP_ID
    value: "101"
  - name: GALAXY_DESTINATIONS_DOCKER_DEFAULT
    value: k8s_default
  - name: GALAXY_DESTINATIONS_NO_DOCKER_DEFAULT
    value: local_no_container
  - name: GALAXY_DESTINATIONS_DEFAULT
    value: dynamic-k8s-dispatcher
  - name: GALAXY_RUNNERS_ENABLE_LOCAL
    value: "true"
  - name: GALAXY_RUNNERS_ENABLE_K8S
    value: "true"

and then running:

cd galaxy-kubernetes/galaxy
helm install -f <file-above>.yaml .

Then, inside the running container on k8s, I see the job_conf.xml injected by the chart instead of the one I baked into the specified image.
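
For context, the override comes from the way each configs: entry is rendered into a ConfigMap and mounted over the matching path in the container. The following is a simplified sketch of that mechanism, not the chart's exact template; resource and volume names are illustrative, and whether the chart uses per-file subPath mounts (as shown) or mounts the whole ConfigMap as a directory, the net effect is the same: the ConfigMap content shadows the file shipped in the image.

# Each key under configs: becomes a ConfigMap entry (illustrative names):
apiVersion: v1
kind: ConfigMap
metadata:
  name: galaxy-configs
data:
  job_conf.xml: |
    <job_conf>...</job_conf>
---
# A subPath volume mount then places that file on top of the path inside
# /galaxy/server/config, so the image's own job_conf.xml is never seen:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: galaxy
spec:
  selector:
    matchLabels:
      app: galaxy
  template:
    metadata:
      labels:
        app: galaxy
    spec:
      containers:
        - name: galaxy
          image: pcm32/galaxy-helm-v3:19.05
          volumeMounts:
            - name: galaxy-configs
              mountPath: /galaxy/server/config/job_conf.xml
              subPath: job_conf.xml
      volumes:
        - name: galaxy-configs
          configMap:
            name: galaxy-configs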

pcm32 commented 5 years ago

The same happens with the container_resolvers_conf.xml that I'm trying to use from the container.

pcm32 commented 5 years ago

I could put the files at a different path and point to them from the galaxy.yml config file... but that feels like a workaround; the fact that configs: doesn't contain a given file should be enough for the chart not to inject it.
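
For illustration, that workaround would look roughly like this in the values file; the /galaxy/server/config/baked/ directory is hypothetical, and the image would have to place its copies of the files there:

configs:
  galaxy.yml: |
    galaxy:
      # Point Galaxy at copies shipped in the image, at a path the chart
      # does not mount over (directory name is hypothetical):
      job_config_file: "/galaxy/server/config/baked/job_conf.xml"
      containers_resolvers_config_file: "/galaxy/server/config/baked/container_resolvers_conf.xml"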

almahmoud commented 5 years ago

To my understanding, the way Helm values work is that values.yaml provides the defaults, and any configs not specified in the new values file or with --set will always inherit from that default file. So for the ones we pre-specified, not adding them does not mean they do not exist, but rather that the default values won't be changed. I believe the reason we included those config files specifically is that they are needed in most cases. Is there any particular reason for baking the config files into the image rather than specifying them through values?

The only way I can think of to avoid this problem is to remove all configs from the default values file (or maybe keep only galaxy.yml and mark it as mandatory) and provide a values-minimal, values-cvmfs, etc. with configs. But I believe that would essentially make the default chart unusable out of the box, which I think is not good Helm practice.

Changing galaxy.yml to point at a different location and putting the baked-in files there seems like a decent workaround. Perhaps we should invert it instead: leave the default config location intact, to be somewhat more compatible with pre-built Galaxy images, and by default nest the configs from the ConfigMap one directory down, so that they sit in their own directory and are easier to ignore.
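
A sketch of that inversion, with an assumed chart/ subdirectory name: the chart's ConfigMap lands one level down, the image's default config directory stays intact, and galaxy.yml opts in to chart-provided files explicitly.

# Mount the chart's ConfigMap in its own subdirectory instead of over
# /galaxy/server/config itself (directory name is illustrative):
volumeMounts:
  - name: galaxy-configs
    mountPath: /galaxy/server/config/chart
# galaxy.yml then references chart-provided files explicitly; any file it
# does not reference falls back to the image's copy in /galaxy/server/config:
configs:
  galaxy.yml: |
    galaxy:
      job_config_file: "/galaxy/server/config/chart/job_conf.xml"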

nuwang commented 5 years ago

In addition to what Alex said, there's already a null/empty check in place: https://github.com/CloudVE/galaxy-kubernetes/blob/666f36537741d41cf7b167cfecc4da571d04146b/galaxy/templates/configs-galaxy.yaml#L18 — so the idea is that you can set the entry to null in your values file if you just want to revert to the container's version:

configs:
  job_conf.xml: ~

Alternatively, you can try helm install --set "configs.job_conf\.xml"=null
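
For reference, such a guard amounts to skipping empty entries when rendering the ConfigMap. A simplified sketch of that kind of template follows (not the chart's exact source; the "galaxy.fullname" helper name is assumed):

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "galaxy.fullname" . }}-configs  # helper name assumed
data:
{{- range $name, $content := .Values.configs }}
{{- if $content }}
  {{ $name }}: |
{{ $content | indent 4 }}
{{- end }}
{{- end }}

With a guard like this, setting configs.job_conf.xml to null (~) makes the entry falsy, so the key is omitted from the rendered ConfigMap and the copy baked into the container is used instead.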