argoproj / argo-workflows

Workflow Engine for Kubernetes
https://argo-workflows.readthedocs.io/
Apache License 2.0

Workflow processing fails to complete due to incomplete WorkflowTaskResult from interrupted pod #12993

Open jesseanttila-cai opened 2 months ago

jesseanttila-cai commented 2 months ago

What happened/what you expected to happen?

As part of #12402 (included from v3.5.3 onwards), workflow pod wait-container behavior was changed to create a placeholder (incomplete) WorkflowTaskResult before waiting for the main-container to complete.

https://github.com/argoproj/argo-workflows/blob/0fdf74511d4671cf0c8c334aa2d90ecd61c5acce/cmd/argoexec/commands/wait.go#L38-L42

The WorkflowTaskResult is finalized after output artifacts, logs etc. have been handled:

https://github.com/argoproj/argo-workflows/blob/0fdf74511d4671cf0c8c334aa2d90ecd61c5acce/cmd/argoexec/commands/wait.go#L34
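To make the ordering concrete, here is a minimal, self-contained mock of that flow (method names are paraphrased from the linked wait.go lines; this is not the real executor API):

package main

import (
    "context"
    "fmt"
)

// Minimal mock of the wait-container flow described above. Names are paraphrased
// from cmd/argoexec/commands/wait.go; this is not the real executor API.
type mockExecutor struct{ reportOutputsCompleted bool }

func (e *mockExecutor) InitializeOutput(ctx context.Context) {
    // A placeholder WorkflowTaskResult is created up front with
    // workflows.argoproj.io/report-outputs-completed=false.
    e.reportOutputsCompleted = false
    fmt.Println("created placeholder WorkflowTaskResult (report-outputs-completed=false)")
}

func (e *mockExecutor) Wait(ctx context.Context) error {
    // Blocks until the main container exits. A forced pod deletion kills the wait
    // container here (or during output handling), so FinalizeOutput never runs.
    return nil
}

func (e *mockExecutor) FinalizeOutput(ctx context.Context) {
    // Only reached on a graceful shutdown path: flips the label to true.
    e.reportOutputsCompleted = true
    fmt.Println("finalized WorkflowTaskResult (report-outputs-completed=true)")
}

func main() {
    ctx := context.Background()
    e := &mockExecutor{}
    defer e.FinalizeOutput(ctx) // skipped entirely if the process is SIGKILLed mid-run

    e.InitializeOutput(ctx)
    _ = e.Wait(ctx)
    // ... output artifacts, logs, etc. would be handled here before finalization ...
}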

If the wait-container is interrupted in a way that prevents FinalizeOutput from being called (e.g. pod deletion without sufficient grace period), an incomplete WorkflowTaskResult remains with the workflows.argoproj.io/report-outputs-completed label set to false. Retries of the same task will produce additional WorkflowTaskResults and will not mark the previous one complete. This leaves the workflow stuck in Processing state until the WorkflowTaskResult is manually edited to mark it complete.
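For reference, the manual workaround amounts to setting the workflows.argoproj.io/report-outputs-completed label to "true" on the stuck WorkflowTaskResult. A minimal client-go sketch of that patch (the namespace and object name are taken from the logs below and are purely illustrative):

package main

import (
    "context"
    "fmt"
    "os"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    dc, err := dynamic.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    gvr := schema.GroupVersionResource{
        Group:    "argoproj.io",
        Version:  "v1alpha1",
        Resource: "workflowtaskresults",
    }
    namespace := "ext-namespace"                     // illustrative, from the logs below
    name := "workflow-hang-example-hf6nh-1013521106" // illustrative: the incomplete task result

    // Merge-patch the label so the controller considers this task result complete.
    patch := []byte(`{"metadata":{"labels":{"workflows.argoproj.io/report-outputs-completed":"true"}}}`)
    _, err = dc.Resource(gvr).Namespace(namespace).Patch(
        context.Background(), name, types.MergePatchType, patch, metav1.PatchOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("marked", name, "complete")
}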

The reproduction example workflow simulates forced pod deletion using a pod that deletes itself, leaving behind an incomplete WorkflowTaskResult. The included workflow controller log snippet shows the resulting processing loop.

This issue may be one of the causes of #12103.

Version

v3.5.3

Paste a small workflow that reproduces the issue. We must be able to run the workflow; don't enter a workflow that uses private images.

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: workflow-hang-example-
spec:
  entrypoint: self-delete
  templates:
    - name: self-delete
      retryStrategy:
        limit: "1"
        retryPolicy: Always
      podSpecPatch: |
        containers:
          - name: main
            env:
            - name: RETRY_COUNT
              value: "{{retries}}"
        terminationGracePeriodSeconds: 0
      container:
        image: bitnami/kubectl:latest
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
        # Simulate interruption via forced pod deletion on first attempt, complete successfully on retry
        command: ["sh", "-c", "sleep 10; [ $RETRY_COUNT -eq 0 ] && kubectl delete pod $POD_NAME; sleep 10"]

Logs from the workflow controller

time="2024-04-29T10:31:58.427Z" level=info msg="Processing workflow" Phase=Running ResourceVersion=26673983 namespace=ext-namespace workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.428Z" level=info msg="Task-result reconciliation" namespace=ext-namespace numObjs=2 workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.429Z" level=debug msg="task result:\n&WorkflowTaskResult{ObjectMeta:{workflow-hang-example-hf6nh-1013521106  ext-namespace  7ad71c97-1148-4c59-aea4-84e8c56d19c4 26673606 1 2024-04-29 10:29:57 +0000 UTC <nil> <nil> map[workflows.argoproj.io/report-outputs-completed:false workflows.argoproj.io/workflow:workflow-hang-example-hf6nh] map[] [{argoproj.io/v1alpha1 Workflow workflow-hang-example-hf6nh af4266ff-e25f-4e4d-b073-dee4b6778da7 <nil> <nil>}] [] [{argoexec Update argoproj.io/v1alpha1 2024-04-29 10:29:57 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:labels\":{\".\":{},\"f:workflows.argoproj.io/report-outputs-completed\":{},\"f:workflows.argoproj.io/workflow\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"af4266ff-e25f-4e4d-b073-dee4b6778da7\\\"}\":{}}}} }]},NodeResult:NodeResult{Phase:,Message:,Outputs:nil,Progress:,},}" namespace=ext-namespace workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.429Z" level=debug msg="task result name:\nworkflow-hang-example-hf6nh-1013521106" namespace=ext-namespace workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.429Z" level=debug msg="Marking task result incomplete workflow-hang-example-hf6nh-1013521106" namespace=ext-namespace workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.429Z" level=info msg="task-result changed" namespace=ext-namespace nodeID=workflow-hang-example-hf6nh-1013521106 workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.429Z" level=debug msg="task result:\n&WorkflowTaskResult{ObjectMeta:{workflow-hang-example-hf6nh-3630579407  ext-namespace  3fe40d94-0f39-45ee-aff6-902f6086b578 26673958 2 2024-04-29 10:30:36 +0000 UTC <nil> <nil> map[workflows.argoproj.io/report-outputs-completed:true workflows.argoproj.io/workflow:workflow-hang-example-hf6nh] map[] [{argoproj.io/v1alpha1 Workflow workflow-hang-example-hf6nh af4266ff-e25f-4e4d-b073-dee4b6778da7 <nil> <nil>}] [] [{argoexec Update argoproj.io/v1alpha1 2024-04-29 10:30:57 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:labels\":{\".\":{},\"f:workflows.argoproj.io/report-outputs-completed\":{},\"f:workflows.argoproj.io/workflow\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"af4266ff-e25f-4e4d-b073-dee4b6778da7\\\"}\":{}}},\"f:outputs\":{}} }]},NodeResult:NodeResult{Phase:,Message:,Outputs:&Outputs{Parameters:[]Parameter{},Artifacts:[]Artifact{},Result:nil,ExitCode:nil,},Progress:,},}" namespace=ext-namespace workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.429Z" level=debug msg="task result name:\nworkflow-hang-example-hf6nh-3630579407" namespace=ext-namespace workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.429Z" level=debug msg="Marking task result complete workflow-hang-example-hf6nh-3630579407" namespace=ext-namespace workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.429Z" level=info msg="task-result changed" namespace=ext-namespace nodeID=workflow-hang-example-hf6nh-3630579407 workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.429Z" level=debug msg="Skipping artifact GC" namespace=ext-namespace workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.429Z" level=debug msg="Evaluating node workflow-hang-example-hf6nh: template: *v1alpha1.WorkflowStep (self-delete), boundaryID: " namespace=ext-namespace workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.429Z" level=debug msg="Resolving the template" base="*v1alpha1.Workflow (namespace=ext-namespace,name=workflow-hang-example-hf6nh)" tmpl="*v1alpha1.WorkflowStep (self-delete)"
time="2024-04-29T10:31:58.429Z" level=debug msg="Getting the template" base="*v1alpha1.Workflow (namespace=ext-namespace,name=workflow-hang-example-hf6nh)" tmpl="*v1alpha1.WorkflowStep (self-delete)"
time="2024-04-29T10:31:58.429Z" level=debug msg="Getting the template by name: self-delete" base="*v1alpha1.Workflow (namespace=ext-namespace,name=workflow-hang-example-hf6nh)" tmpl="*v1alpha1.WorkflowStep (self-delete)"
time="2024-04-29T10:31:58.430Z" level=debug msg="Node workflow-hang-example-hf6nh already completed" namespace=ext-namespace workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.430Z" level=info msg="TaskSet Reconciliation" namespace=ext-namespace workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.430Z" level=info msg=reconcileAgentPod namespace=ext-namespace workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.430Z" level=debug msg="Task results completion status: map[workflow-hang-example-hf6nh-1013521106:false workflow-hang-example-hf6nh-3630579407:true]" namespace=ext-namespace workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.430Z" level=debug msg="taskresults of workflow are incomplete or still have daemon nodes, so can't mark workflow completed" fromPhase=Running namespace=ext-namespace toPhase=Succeeded workflow=workflow-hang-example-hf6nh
time="2024-04-29T10:31:58.443Z" level=info msg="Workflow update successful" namespace=ext-namespace phase=Running resourceVersion=26673983 workflow=workflow-hang-example-hf6nh

Logs from your workflow's wait container

N/A (wait container must be forcibly interrupted for this issue to appear)

jswxstw commented 2 months ago

Determining whether the task result has been successfully reported via AnnotationKeyReportOutputsCompleted seems unreliable.

shuangkun commented 2 months ago

After reproducing it, I can confirm there are indeed some problems; we need to think about how to fix them.

alexlatchford commented 2 months ago

Yeah, we are just concluding an RCA after a cohort of our workflows got "stuck" in Running state as the workflow-controller upgrade from v3.4.16 to v3.5.6 was rolled out. We reached the same conclusion that https://github.com/argoproj/argo-workflows/pull/11947 was likely the root cause, but https://github.com/argoproj/argo-workflows/pull/12402 seems related too! In either case, it looks like it's isolated to a transient issue on our end, caused by an incompatibility between the argoexec:v3.4.16 sidecars inside the active pods of workflows running at the time of the upgrade and the new workflow-controller:v3.5.6 control plane. Thanks for the original report and to the maintainers for all the effort here!

juliev0 commented 1 month ago

Is the primary case for this one in which the Controller itself issues a SIGKILL to the container because the container isn't responding fast enough to a SIGTERM?

If so, it seems like we need to indicate on the Controller side that if we are doing a SIGKILL not to wait for the WorkflowTaskResult for that task, right?

jesseanttila-cai commented 1 month ago

Is the primary case for this one in which the Controller itself issues a SIGKILL to the container because the container isn't responding fast enough to a SIGTERM?

The root cause for the pod interruption in my case was primarily related to node deprovisioning, as the environment where this issue appeared consistently uses EC2 Spot Instances along with an aggressive deprovisioning strategy for underutilized nodes. I believe that node-pressure eviction could also cause this issue, along with any other external condition that would result in a non-graceful pod termination.

juliev0 commented 1 month ago

Got it. That's interesting that node-deprovisioning and node-pressure eviction would result in SIGKILL rather than SIGTERM.

Joibel commented 1 day ago

Got it. That's interesting that node-deprovisioning and node-pressure eviction would result in SIGKILL rather than SIGTERM.

There are projects like NTH (the AWS Node Termination Handler) which should get you a SIGTERM with some time to do some work. Perhaps they're not being used here. For non-graceful termination, though, we still need a solution.

I think we need to treat "pod gone away" for longer than a reasonable period (to allow for propagation of the WorkflowTaskResult) as pod failure and mark the outputs as completed (with error), to allow the workflow to fail/retry.
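A purely hypothetical sketch of that decision rule (illustrative only; the grace period value and all names are made up, and this is not existing controller code):

package main

import (
    "fmt"
    "time"
)

// Hypothetical state for one task result, as the proposal above describes it.
type taskResultState struct {
    reportOutputsCompleted bool
    podGoneSince           *time.Time // nil while the pod still exists
}

const podGoneGracePeriod = 2 * time.Minute // arbitrary example value

// treatAsCompleted returns true if the result was finalized normally, or if the
// pod has been gone long enough that the result can be considered failed/complete.
func treatAsCompleted(tr taskResultState, now time.Time) bool {
    if tr.reportOutputsCompleted {
        return true
    }
    // Pod disappeared without finalizing its outputs; after the grace period
    // (allowing for propagation delays), consider it complete with error.
    return tr.podGoneSince != nil && now.Sub(*tr.podGoneSince) > podGoneGracePeriod
}

func main() {
    gone := time.Now().Add(-5 * time.Minute)
    fmt.Println(treatAsCompleted(taskResultState{podGoneSince: &gone}, time.Now())) // true: pod gone past grace period
    fmt.Println(treatAsCompleted(taskResultState{}, time.Now()))                    // false: pod still present, not finalized
}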