argoproj / argo-workflows

Workflow Engine for Kubernetes
https://argo-workflows.readthedocs.io/
Apache License 2.0

support envFrom to use configMap as env vars #3310

Closed linehrr closed 4 years ago

linehrr commented 4 years ago

Summary

Standard Kubernetes supports

      envFrom:
      - configMapRef:
          name: special-config

which imports a ConfigMap's entries as env vars into a container.

Motivation

We have hundreds of env vars to import; specifying them one by one would be too tedious.

Proposal

Implement the same `envFrom` syntax as Kubernetes.

simster7 commented 4 years ago

Do you mean to use them in a container? We use the K8s Container type directly, so you should be able to use this field when defining one.

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
  labels:
    workflows.argoproj.io/archive-strategy: "false"
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [cowsay]
      args: ["hello world"]
      envFrom:
        ...
linehrr commented 4 years ago

2020/06/29 11:03:57 Failed to parse workflow template: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal object into Go struct field Container.spec.workflowSpec.templates.container.envFrom of type []v1.EnvFromSource

When I use it, I get this error from `argo submit`.

         envFrom:
           configMapRef:
             name: cluster-env

this is the way I am using it.

simster7 commented 4 years ago

Can you post the full Workflow you're trying to submit? The Container.spec.workflowSpec. part of the error message seems to indicate that you're submitting a malformed object.

linehrr commented 4 years ago
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: unified-report-cron
  namespace: data-scheduler
spec:
  schedule: "5 8 * * *"
  workflowSpec:
    volumes:
      - name: cluster-conf
        configMap:
          name: cluster-configs
    nodeSelector:
      worker: spark
    hostNetwork: true
    entrypoint: main-entry
    templates:
    - name: druid-template
      inputs:
        parameters:
        - name: schema
      container:
        image: mywork/data-orchestrator:latest
        imagePullPolicy: Always
        command: ["/bin/bash", "-c"]
        args:
          - >-
            for i in {0..30}; do
              day=$(date --date="now - $i day" +%Y-%m-%d);
              inv {{inputs.parameters.schema}} -d$day -odruid3-overlord.mywork.org;
              sleep 30;
            done
    - name: report-template
      inputs:
        parameters:
        - name: command
        - name: hdfs-path
        - name: hdfs-tmp-path
      container:
        image: mywork/aa-data-unified-revenue:0.25.2
        envFrom:
          configMapRef:
            name: cluster-env
        env:
          - name: JOB_NAME
            value: reporting-job
        volumeMounts:
          - name: cluster-conf
            mountPath: /opt/cluster-conf
        command: ["{{inputs.parameters.command}}"]

Please see above. envFrom is under the container spec, which looks to be the right place as well, since it works in a normal Deployment.

simster7 commented 4 years ago

envFrom is actually a list. Try

        envFrom:
          - configMapRef:
              name: cluster-env
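For reference, `envFrom` is a list of sources, and each entry names exactly one source, either a `configMapRef` or a `secretRef`. A minimal sketch (the names `cluster-env` and `cluster-secrets` are illustrative):

```yaml
# envFrom takes a list; each item holds one source reference.
envFrom:
  - configMapRef:
      name: cluster-env       # every key in this ConfigMap becomes an env var
  - secretRef:
      name: cluster-secrets   # illustrative name; Secrets use the same list form
```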
linehrr commented 4 years ago

@simster7 thanks! That was the issue! I will close this.

jomach commented 2 years ago

secretRef seems not to work :(