argoproj / argo-workflows

Workflow Engine for Kubernetes
https://argo-workflows.readthedocs.io/
Apache License 2.0

JSON logging does not work for `resource` template #12788

Open tomashejatko opened 5 months ago

tomashejatko commented 5 months ago

What happened/what did you expect to happen?

Workflows of type "resource" are not logging in JSON format; instead the output log is mixed:

time="2024-03-12T15:03:58.630Z" level=info msg="Starting Workflow Executor" version=v3.5.5
{"level":"info","msg":"Starting Workflow Executor","time":"2024-03-12T15:03:57.648Z","version":"v3.5.5"}
{"Duration":1000000000,"Factor":1.6,"Jitter":0.5,"Steps":5,"level":"info","msg":"Using executor retry strategy","time":"2024-03-12T15:03:57.651Z"}
{"deadline":"0001-01-01T00:00:00Z","includeScriptOutput":false,"level":"info","msg":"Executor initialized","namespace":"dev36560","podName":"hook-backend-reset-jpcwz-get-cm-940356409","templateName":"get-cm","time":"2024-03-12T15:03:57.651Z","version":"\u0026Version{Version:v3.5.5,BuildDate:2024-02-29T20:59:20Z,GitCommit:c80b2e91ebd7e7f604e88442f45ec630380effa0,GitTag:v3.5.5,GitTreeState:clean,GoVersion:go1.21.7,Compiler:gc,Platform:linux/amd64,}"}
{"level":"info","msg":"Loading manifest to /tmp/manifest.yaml","time":"2024-03-12T15:03:57.683Z"}
time="2024-03-12T15:03:58.633Z" level=info msg="Using executor retry strategy" Duration=1s Factor=1.6 Jitter=0.5 Steps=5
time="2024-03-12T15:03:58.633Z" level=info msg="Executor initialized" deadline="0001-01-01 00:00:00 +0000 UTC" includeScriptOutput=false namespace=dev36560 podName=hook-backend-reset-jpcwz-get-cm-940356409 templateName=get-cm version="&Version{Version:v3.5.5,BuildDate:2024-02-29T20:59:20Z,GitCommit:c80b2e91ebd7e7f604e88442f45ec630380effa0,GitTag:v3.5.5,GitTreeState:clean,GoVersion:go1.21.7,Compiler:gc,Platform:linux/amd64,}"
time="2024-03-12T15:03:58.645Z" level=info msg="Loading manifest to /tmp/manifest.yaml"
time="2024-03-12T15:03:58.645Z" level=info msg="kubectl get -f /tmp/manifest.yaml -o json"
time="2024-03-12T15:03:58.674Z" level=info msg="Resource: dev36560/configmap./backend-reset. SelfLink: api/v1/namespaces/dev36560/configmaps/backend-reset"
time="2024-03-12T15:03:58.674Z" level=info msg="Saving resource output parameters"
{"level":"info","msg":"Start loading input artifacts...","time":"2024-03-12T15:03:57.683Z"}
{"level":"info","msg":"Alloc=7178 TotalAlloc=12946 Sys=24165 NumGC=4 Goroutines=4","time":"2024-03-12T15:03:57.683Z"}
time="2024-03-12T15:03:58.674Z" level=info msg="kubectl -n dev36560 get configmap./backend-reset -o jsonpath={.data.wipe}"
time="2024-03-12T15:03:58.715Z" level=info msg=kubectl args="[kubectl -n dev36560 get configmap./backend-reset -o jsonpath={.data.wipe}]" error="<nil>" out=no
time="2024-03-12T15:03:58.715Z" level=info msg="Saved output parameter: wipe, value: no"
{"argo":true,"error":null,"level":"info","msg":"sub-process exited","time":"2024-03-12T15:03:59.606Z"}

Version

3.5.5

Paste a small workflow that reproduces the issue. We must be able to run the workflow; don't enter a workflow that uses private images.

{
  "name": "get-cm",
  "inputs": {
    "parameters": [
      {
        "name": "serviceName"
      }
    ]
  },
  "outputs": {
    "parameters": [
      {
        "name": "wipe",
        "valueFrom": {
          "jsonPath": "{.data.wipe}"
        }
      }
    ]
  },
  "metadata": {},
  "resource": {
    "action": "get",
    "manifest": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n    name: {{inputs.parameters.serviceName}}-reset\n"
  }
}

Logs from the workflow controller

{"Phase":"","ResourceVersion":"5025152597","level":"info","msg":"Processing workflow","namespace":"dev36560","time":"2024-03-12T15:03:57.082Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"warning","msg":"Non-transient error: configmaps \"artifact-repositories\" not found","time":"2024-03-12T15:03:57.090Z"}
{"artifactRepositoryRef":{"default":true},"level":"info","msg":"resolved artifact repository","time":"2024-03-12T15:03:57.090Z"}
{"level":"info","msg":"Task-result reconciliation","namespace":"dev36560","numObjs":0,"time":"2024-03-12T15:03:57.090Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Updated phase  -\u003e Running","namespace":"dev36560","time":"2024-03-12T15:03:57.090Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"warning","msg":"Node was nil, will be initialized as type Skipped","namespace":"dev36560","time":"2024-03-12T15:03:57.090Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"was unable to obtain node for , letting display name to be nodeName","namespace":"dev36560","time":"2024-03-12T15:03:57.090Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Steps node hook-backend-reset-jpcwz initialized Running","namespace":"dev36560","time":"2024-03-12T15:03:57.090Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"StepGroup node hook-backend-reset-jpcwz-2771728536 initialized Running","namespace":"dev36560","time":"2024-03-12T15:03:57.090Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"warning","msg":"Node was nil, will be initialized as type Skipped","namespace":"dev36560","time":"2024-03-12T15:03:57.090Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Pod node hook-backend-reset-jpcwz-940356409 initialized Pending","namespace":"dev36560","time":"2024-03-12T15:03:57.090Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Created pod: hook-backend-reset-jpcwz[0].get-cm (hook-backend-reset-jpcwz-get-cm-940356409)","namespace":"dev36560","time":"2024-03-12T15:03:57.125Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Workflow step group node hook-backend-reset-jpcwz-2771728536 not yet completed","namespace":"dev36560","time":"2024-03-12T15:03:57.126Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"TaskSet Reconciliation","namespace":"dev36560","time":"2024-03-12T15:03:57.126Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"reconcileAgentPod","namespace":"dev36560","time":"2024-03-12T15:03:57.126Z","workflow":"hook-backend-reset-jpcwz"}
{"Workflow Size":22557,"level":"info","msg":"Workflow to be dehydrated","time":"2024-03-12T15:03:57.126Z"}
{"level":"info","msg":"Workflow update successful","namespace":"dev36560","phase":"Running","resourceVersion":"5025152621","time":"2024-03-12T15:03:57.147Z","workflow":"hook-backend-reset-jpcwz"}
{"Phase":"Running","ResourceVersion":"5025152621","level":"info","msg":"Processing workflow","namespace":"dev36560","time":"2024-03-12T15:04:07.126Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Task-result reconciliation","namespace":"dev36560","numObjs":1,"time":"2024-03-12T15:04:07.126Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"task-result changed","namespace":"dev36560","nodeID":"hook-backend-reset-jpcwz-940356409","time":"2024-03-12T15:04:07.126Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"node changed","namespace":"dev36560","new.message":"","new.phase":"Succeeded","new.progress":"0/1","nodeID":"hook-backend-reset-jpcwz-940356409","old.message":"","old.phase":"Pending","old.progress":"0/1","time":"2024-03-12T15:04:07.126Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Step group node hook-backend-reset-jpcwz-2771728536 successful","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"node hook-backend-reset-jpcwz-2771728536 phase Running -\u003e Succeeded","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"node hook-backend-reset-jpcwz-2771728536 finished: 2024-03-12 15:04:07.127389443 +0000 UTC","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"StepGroup node hook-backend-reset-jpcwz-2838691917 initialized Running","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"SG Outbound nodes of hook-backend-reset-jpcwz-940356409 are [hook-backend-reset-jpcwz-940356409]","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Skipping hook-backend-reset-jpcwz[1].wipe-database: when 'no == yes' evaluated false","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Skipped node hook-backend-reset-jpcwz-1176794328 initialized Skipped (message: when 'no == yes' evaluated false)","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Skipping hook-backend-reset-jpcwz[1].wipe-redis-rabbit: when 'no == yes' evaluated false","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Skipped node hook-backend-reset-jpcwz-3061444223 initialized Skipped (message: when 'no == yes' evaluated false)","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Step group node hook-backend-reset-jpcwz-2838691917 successful","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"node hook-backend-reset-jpcwz-2838691917 phase Running -\u003e Succeeded","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"node hook-backend-reset-jpcwz-2838691917 finished: 2024-03-12 15:04:07.127629886 +0000 UTC","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"StepGroup node hook-backend-reset-jpcwz-3912312438 initialized Running","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"SG Outbound nodes of hook-backend-reset-jpcwz-1176794328 are [hook-backend-reset-jpcwz-1176794328]","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"SG Outbound nodes of hook-backend-reset-jpcwz-3061444223 are [hook-backend-reset-jpcwz-3061444223]","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Skipping hook-backend-reset-jpcwz[2].wipe-elasticsearch: when 'no == yes' evaluated false","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Skipped node hook-backend-reset-jpcwz-1313095581 initialized Skipped (message: when 'no == yes' evaluated false)","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Step group node hook-backend-reset-jpcwz-3912312438 successful","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"node hook-backend-reset-jpcwz-3912312438 phase Running -\u003e Succeeded","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"node hook-backend-reset-jpcwz-3912312438 finished: 2024-03-12 15:04:07.12774789 +0000 UTC","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"StepGroup node hook-backend-reset-jpcwz-2771287251 initialized Running","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"SG Outbound nodes of hook-backend-reset-jpcwz-1313095581 are [hook-backend-reset-jpcwz-1313095581]","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Skipping hook-backend-reset-jpcwz[3].update-cm: when 'no == yes' evaluated false","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Skipped node hook-backend-reset-jpcwz-3060846225 initialized Skipped (message: when 'no == yes' evaluated false)","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Step group node hook-backend-reset-jpcwz-2771287251 successful","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"node hook-backend-reset-jpcwz-2771287251 phase Running -\u003e Succeeded","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"node hook-backend-reset-jpcwz-2771287251 finished: 2024-03-12 15:04:07.127856308 +0000 UTC","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Outbound nodes of hook-backend-reset-jpcwz-3060846225 is [hook-backend-reset-jpcwz-3060846225]","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Outbound nodes of hook-backend-reset-jpcwz is [hook-backend-reset-jpcwz-3060846225]","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"node hook-backend-reset-jpcwz phase Running -\u003e Succeeded","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"node hook-backend-reset-jpcwz finished: 2024-03-12 15:04:07.127886147 +0000 UTC","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"TaskSet Reconciliation","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"reconcileAgentPod","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Updated phase Running -\u003e Succeeded","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"level":"info","msg":"Marking workflow completed","namespace":"dev36560","time":"2024-03-12T15:04:07.127Z","workflow":"hook-backend-reset-jpcwz"}
{"Workflow Size":25319,"level":"info","msg":"Workflow to be dehydrated","time":"2024-03-12T15:04:07.128Z"}
{"action":"deletePod","key":"dev36560/hook-backend-reset-jpcwz-1340600742-agent/deletePod","level":"info","msg":"cleaning up pod","time":"2024-03-12T15:04:07.133Z"}
{"level":"info","msg":"Workflow update successful","namespace":"dev36560","phase":"Succeeded","resourceVersion":"5025154988","time":"2024-03-12T15:04:07.144Z","workflow":"hook-backend-reset-jpcwz"}
{"action":"labelPodCompleted","key":"dev36560/hook-backend-reset-jpcwz-get-cm-940356409/labelPodCompleted","level":"info","msg":"cleaning up pod","time":"2024-03-12T15:04:07.172Z"}

Logs from your workflow's wait container

There is no wait container...

kubectl logs -c wait -l workflows.argoproj.io/workflow=hook-backend-reset,workflow.argoproj.io/phase!=Succeeded
error: container wait is not valid for pod hook-backend-reset-get-cm-4199754250
agilgur5 commented 5 months ago

Some more information was provided on Slack:

 Containers:
   main:
     Container ID:  containerd://168102524146065995dfd90b3f78f439b3c2570f2dd544dbbd5533f03ca9de7a
     Image:         515719629808.dkr.ecr.eu-west-1.amazonaws.com/quay/argoproj/argoexec:v3.5.5
     Image ID:      515719629808.dkr.ecr.eu-west-1.amazonaws.com/quay/argoproj/argoexec@sha256:32a568bd1ecb2691a61aa4a646d90b08fe5c4606a2d5cbf264565b1ced98f12b
     Port:          <none>
     Host Port:     <none>
     Command:
       /var/run/argo/argoexec
       emissary
       --loglevel
       info
       --log-format
       json
       --
       argoexec
       resource
       get
     State:          Terminated
       Reason:       Completed
       Exit Code:    0
       Started:      Thu, 07 Mar 2024 17:40:47 +0100
       Finished:     Thu, 07 Mar 2024 17:40:48 +0100
     Ready:          False
     Restart Count:  0
     Environment:
       ARGO_POD_NAME:                      hook-backend-reset-get-cm-4199754250 (v1:metadata.name)
       ARGO_POD_UID:                        (v1:metadata.uid)
       GODEBUG:                            x509ignoreCN=0
       ARGO_WORKFLOW_NAME:                 hook-backend-reset
       ARGO_WORKFLOW_UID:                  213bf655-d283-464f-a0ff-d2ac374ad0f9
       ARGO_CONTAINER_NAME:                main
       ARGO_TEMPLATE:                      {"name":"get-cm","inputs":{"parameters":[{"name":"serviceName","value":"backend"}]},"outputs":{"parameters":[{"name":"wipe","valueFrom":{"jsonPath":"{.data.wipe}"}}]},"metadata":{},"resource":{"action":"get","manifest":"apiVersion: v1\nkind: ConfigMap\nmetadata:\n    name: backend-reset\n"}}
       ARGO_NODE_ID:                       hook-backend-reset-4199754250
       ARGO_INCLUDE_SCRIPT_OUTPUT:         false
       ARGO_DEADLINE:                      0001-01-01T00:00:00Z
       ARGO_PROGRESS_FILE:                 /var/run/argo/progress
       ARGO_PROGRESS_PATCH_TICK_DURATION:  1m0s
       ARGO_PROGRESS_FILE_TICK_DURATION:   3s
       AWS_STS_REGIONAL_ENDPOINTS:         regional
       AWS_DEFAULT_REGION:                 eu-west-1
       AWS_REGION:                         eu-west-1
agilgur5 commented 5 months ago

I guess the initConfig function hasn't run everywhere? ~It does seem missing from the initExecutor function where several of the lines are from.~ EDIT: It runs in PersistentPreRun, which, per the Cobra docs, is supposed to be inherited by all child commands. In this case, all child commands of argoexec should run it, which should include both init and resource... 🤔

Workflows of type "resource" are not logging in JSON format; instead the output log is mixed:

What command did you use to get these logs? Are these logs from a single container? The logs you provided contain duplicate entries in both standard and JSON formats -- I'm not sure if that's because you retrieved logs from multiple containers or Pods?

tomashejatko commented 5 months ago

Hello, my shell alias for logs is kubectl logs --max-log-requests=6 -f --all-containers, so it is from all containers. I have added --prefix=true and here is the result; it seems that the init container is okay, while main is not.

[pod/hook-backend-reset-get-cm-4199754250/main] time="2024-03-13T07:35:04.029Z" level=info msg="Starting Workflow Executor" version=v3.5.5
[pod/hook-backend-reset-get-cm-4199754250/init] {"Duration":1000000000,"Factor":1.6,"Jitter":0.5,"Steps":5,"level":"info","msg":"Using executor retry strategy","time":"2024-03-13T07:35:03.123Z"}
[pod/hook-backend-reset-get-cm-4199754250/main] time="2024-03-13T07:35:04.032Z" level=info msg="Using executor retry strategy" Duration=1s Factor=1.6 Jitter=0.5 Steps=5
[pod/hook-backend-reset-get-cm-4199754250/main] time="2024-03-13T07:35:04.032Z" level=info msg="Executor initialized" deadline="0001-01-01 00:00:00 +0000 UTC" includeScriptOutput=false namespace=dev36560 podName=hook-backend-reset-get-cm-4199754250 templateName=get-cm version="&Version{Version:v3.5.5,BuildDate:2024-02-29T20:59:20Z,GitCommit:c80b2e91ebd7e7f604e88442f45ec630380effa0,GitTag:v3.5.5,GitTreeState:clean,GoVersion:go1.21.7,Compiler:gc,Platform:linux/amd64,}"
[pod/hook-backend-reset-get-cm-4199754250/main] time="2024-03-13T07:35:04.046Z" level=info msg="Loading manifest to /tmp/manifest.yaml"
[pod/hook-backend-reset-get-cm-4199754250/main] time="2024-03-13T07:35:04.046Z" level=info msg="kubectl get -f /tmp/manifest.yaml -o json"
[pod/hook-backend-reset-get-cm-4199754250/main] time="2024-03-13T07:35:04.073Z" level=info msg="Resource: dev36560/configmap./backend-reset. SelfLink: api/v1/namespaces/dev36560/configmaps/backend-reset"
[pod/hook-backend-reset-get-cm-4199754250/init] {"deadline":"0001-01-01T00:00:00Z","includeScriptOutput":false,"level":"info","msg":"Executor initialized","namespace":"dev36560","podName":"hook-backend-reset-get-cm-4199754250","templateName":"get-cm","time":"2024-03-13T07:35:03.123Z","version":"\u0026Version{Version:v3.5.5,BuildDate:2024-02-29T20:59:20Z,GitCommit:c80b2e91ebd7e7f604e88442f45ec630380effa0,GitTag:v3.5.5,GitTreeState:clean,GoVersion:go1.21.7,Compiler:gc,Platform:linux/amd64,}"}
[pod/hook-backend-reset-get-cm-4199754250/init] {"level":"info","msg":"Loading manifest to /tmp/manifest.yaml","time":"2024-03-13T07:35:03.153Z"}
[pod/hook-backend-reset-get-cm-4199754250/init] {"level":"info","msg":"Start loading input artifacts...","time":"2024-03-13T07:35:03.153Z"}
[pod/hook-backend-reset-get-cm-4199754250/init] {"level":"info","msg":"Alloc=7214 TotalAlloc=12946 Sys=24421 NumGC=4 Goroutines=4","time":"2024-03-13T07:35:03.153Z"}
[pod/hook-backend-reset-get-cm-4199754250/main] time="2024-03-13T07:35:04.073Z" level=info msg="Saving resource output parameters"
[pod/hook-backend-reset-get-cm-4199754250/main] time="2024-03-13T07:35:04.073Z" level=info msg="kubectl -n dev36560 get configmap./backend-reset -o jsonpath={.data.wipe}"
[pod/hook-backend-reset-get-cm-4199754250/main] time="2024-03-13T07:35:04.108Z" level=info msg=kubectl args="[kubectl -n dev36560 get configmap./backend-reset -o jsonpath={.data.wipe}]" error="<nil>" out=yes
[pod/hook-backend-reset-get-cm-4199754250/main] time="2024-03-13T07:35:04.108Z" level=info msg="Saved output parameter: wipe, value: yes"
[pod/hook-backend-reset-get-cm-4199754250/main] {"argo":true,"error":null,"level":"info","msg":"sub-process exited","time":"2024-03-13T07:35:05.004Z"}
agilgur5 commented 5 months ago

Hello, my shell alias for logs is kubectl logs --max-log-requests=6 -f --all-containers, so it is from all containers. I have added --prefix=true and here is the result; it seems that the init container is okay, while main is not.

That's very helpful for debugging, thanks!

agilgur5 commented 5 months ago

Command:
       /var/run/argo/argoexec
       emissary
       --loglevel
       info
       --log-format
       json
       --
       argoexec
       resource
       get

I think this might be the issue: argoexec emissary is getting the --log-format arg, but argoexec resource isn't. So the executor/resource code itself seems to handle the args properly, but the args aren't getting passed through to it to begin with, if I'm understanding correctly.

EDIT: this does seem to be the difference between the init and main containers -- the init container runs argoexec init directly with all of the args.

agilgur5 commented 5 months ago

Not the prettiest, but I believe you can work around this using podSpecPatch

tomashejatko commented 5 months ago

I can confirm that the workaround works :) I have used this for "resource get":

podSpecPatch: '{"containers":[{"name":"main", "command": ["/var/run/argo/argoexec","emissary","--loglevel","info","--log-format","json","--","argoexec","resource","get","--log-format","json"]}]}'