tooptoop4 opened this issue 11 months ago
the pod itself for that step is in Completed state.
So, to summarize, the Pod is "Completed" but the Step and the Workflow are both still showing as "Running", correct?
I'm imagining that the Controller is failing to process it (especially as it has surpassed the activeDeadlineSeconds) or the Executor isn't reporting it correctly.
Since it happens very infrequently, this sounds like a very rare race condition.
Correct. There is something in the workflow controller logs below that caught my eye and makes me think it's missing retry logic when receiving a transient error from the k8s control plane:
{"time":"2023-10-29T00:53:20.427334099Z","stream":"stderr","_p":"F","log":"time=\"2023-10-29T00:53:20.427Z\" level=info msg=\"Updated phase Running -> Succeeded\" namespace=auth workflow=mywf-1698540780
{"time":"2023-10-29T00:53:20.427365589Z","stream":"stderr","_p":"F","log":"time=\"2023-10-29T00:53:20.427Z\" level=info msg=\"Marking workflow completed\" namespace=auth workflow=mywf-1698540780
{"time":"2023-10-29T00:53:20.427425319Z","stream":"stderr","_p":"F","log":"time=\"2023-10-29T00:53:20.427Z\" level=info msg=\"Marking workflow as pending archiving\" namespace=auth workflow=mywf-1698540780
{"time":"2023-10-29T00:53:20.43283518Z","stream":"stderr","_p":"F","log":"time=\"2023-10-29T00:53:20.432Z\" level=info msg=\"cleaning up pod\" action=deletePod key=auth/mywf-1698540780-1340600742-agent/deletePod
{"time":"2023-10-29T00:53:20.440336606Z","stream":"stderr","_p":"F","log":"time=\"2023-10-29T00:53:20.440Z\" level=warning msg=\"Error updating workflow: Unauthorized Unauthorized\" namespace=auth workflow=mywf-1698540780
{"time":"2023-10-29T00:53:20.440382846Z","stream":"stderr","_p":"F","log":"E1029 00:53:20.440195 1 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"mywf-1698540780.17926ddbd5f9adba\", GenerateName:\"\", Namespace:\"auth\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Workflow\", Namespace:\"auth\", Name:\"mywf-1698540780\", UID:\"58fcb020-71bb-48ae-a170-add3f2ad283e\", APIVersion:\"argoproj.io/v1alpha1\", ResourceVersion:\"93158593\", FieldPath:\"\"}, Reason:\"WorkflowSucceeded\", Message:\"Workflow completed\", Source:v1.EventSource{Component:\"workflow-controller\", Host:\"\"}, FirstTimestamp:time.Date(2023, time.October, 29, 0, 53, 20, 427273658, time.Local), LastTimestamp:time.Date(2023, time.October, 29, 0, 53, 20, 427273658, time.Local), Count:1, Type:\"Normal\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'Unauthorized' (will not retry!)
{"time":"2023-10-29T00:53:20.441209382Z","stream":"stderr","_p":"F","log":"time=\"2023-10-29T00:53:20.441Z\" level=warning msg=\"failed to clean-up pod\" action=deletePod error=Unauthorized key=auth/mywf-1698540780-1340600742-agent/deletePod
{"time":"2023-10-29T00:53:40.359148369Z","stream":"stderr","_p":"F","log":"time=\"2023-10-29T00:53:40.358Z\" level=info msg=\"cleaning up pod\" action=killContainers key=auth/mywf-1698540780-main-3299532583/killContainers
We've had this happen on 8 jobs yesterday. We've noticed that for each one of those:
- status.conditions[].PodRunning was set to False, but status.conditions[].Completed was missing.
- workflows.argoproj.io/completed was "false"/null.
- phase is marked as Succeeded.
- progress is however showing 51/52.
- taskResultsCompletionStatus of the specific task was set to False.
- The WorkflowTaskResult's workflows.argoproj.io/report-outputs-completed was also "false".
- outputs.exitCode was "0", phase was "Error", progress was "0/1", and message was "OOMKilled (exit code 137)".
The problem is exacerbated by the Deadline, and it snowballs onto subsequent jobs as they get stuck in Pending with the message "Workflow processing has been postponed because too many workflows are already running".
Same problem in 3.5.2: the pod is Completed but the workflow is Running, or doesn't even display the status (the wait container logs seem fine). Any suggestions?
@ZeidAqemia @zqhi71 do your controller logs have any errors like the ones in https://github.com/argoproj/argo-workflows/issues/12103#issuecomment-1784297997 ? what % of workflows are affected? what version are u running?
for me on v3.4.11 this affects less than 0.01% of workflows
v3.5.2 here and it's 100% of workflows where one task OOMs
Hi guys. We found that the default controller settings are just not suitable for thousands of CronWorkflows. When we adjusted --cronworkflow-worker, qps and burst, the CronWorkflows worked fine. If someone has the same problem, maybe adjusting the settings following this document (https://argo-workflows.readthedocs.io/en/latest/scaling/) will help.
This seems related to Hang on "Workflow processing has been postponed due to max parallelism limit" #11808
I'm seeing the same issue when using namespaceParallelism
Hi @tooptoop4 Were you able to make any progress on this issue, or is it still reoccurring?
It still reoccurs, but rarely: roughly 1 in 20,000 workflow runs. Did you see the log in https://github.com/argoproj/argo-workflows/issues/12103#issuecomment-1784297997 ?
Yes, but in my case we are facing the same stuck-in-Running state without seeing any server-issue-related message in the log 😭
I wonder if https://github.com/argoproj/argo-workflows/pull/12233 helps @tczhao
Hi @tooptoop4, we tried https://github.com/argoproj/argo-workflows/pull/12233 but it doesn't help in my case. I highly suspect the root cause in my case is https://github.com/argoproj/argo-workflows/issues/12997; we typically observe a child node missing when a workflow is stuck in Running in my situation. I should have a PR ready in a few days with more explanation.
We also observed this in a small portion of our workflows. The workflows all have 5 tasks defined in their spec. On the ones that are stuck running, there are suddenly 6 or 7 task results in the status.
We also did a log search. The 5 tasks that are reported as completed are found in the controller logs. For the 2 additional tasks that are not finished, there is not a single log line. There is also no trace of them in the UI, or in any other status field.
We are using version v3.5.6
any idea? @jswxstw @shuangkun
After reading the background of the issue, it seems that your situation is different from the others (not similar to #12993).
I see a lot of logs like: {"time":"2023-10-29T00:56:00.886969281Z","stream":"stderr","_p":"F","log":"time=\"2023-10-29T00:56:00.886Z\" level=info msg=\"Workflow processing has been postponed due to max parallelism limit\" key=auth/mywf-1698540780
Does the workflow stuck in the Running state fail to be processed because of it?
We're also seeing this issue only in 3.5.x versions. I initially tried to upgrade and saw this issue on an earlier 3.5.x. It's been a month or so, so I tried again with 3.5.8, and I'm still seeing the issue. This is with any workflow I try to run - steps, dags, containers and both invoked from workflow templates or crons (although I doubt that matters).
@sstaley-hioscar, could you verify that what you see in the wait container logs from one of these runs confirms it is using WorkflowTaskResults, and that the controller itself has appropriate RBAC to read the WorkflowTaskResults, since you've said you have custom RBAC.
@Joibel here are some logs from the wait container:
time="2024-06-22T17:12:38.814Z" level=info msg="Starting Workflow Executor" version=v3.4.8
time="2024-06-22T17:12:38.819Z" level=info msg="Using executor retry strategy" Duration=1s Factor=1.6 Jitter=0.5 Steps=5
...
time="2024-06-22T17:12:40.821Z" level=info msg="stopping progress monitor (context done)" error="context canceled"
It looks like it's using the wrong version. I'll look into that.
@Joibel It looks like that was the issue. I can delete some of those example templates to prevent cluttering up this thread, if you like.
It looks like I'm running into a new error though. Argo Workflows is now attempting to use the service account of the namespace the workflow is running in to patch pods:
\"system:serviceaccount:monitoring:default\" cannot patch resource \"pods\" in API group \"\" in the namespace \"monitoring\""
which wasn't what our rbac was set up for in the previous version. Is this expected new behavior or is there a configuration I need to set to make the controller use its own token for these API calls?
It looks like it's using the wrong version. I'll look into that.
@sstaley-hioscar This is the root cause: a wait container with version v3.4.8 does not write the LabelKeyReportOutputsCompleted label in the WorkflowTaskResult.
When upgrading from version 3.5.1 or below directly to version 3.5.5 or above, if there are running workflows in the cluster, these workflows will be stuck in Running, even though their pods are Completed.
If you're upgrading from a version which does not record TaskResultCompletionStatus in the status block of the workflow to one that does, the nodes will remain in Running despite the pods being Completed.
This is because of this choice from #12537, which means missing TaskResultCompletionStatus entries are always going to be regarded as incomplete.
This blocks the controller from making any progress, and means upgrades over this with running workflows will always fail to complete in-flight workflows.
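To make that gate concrete, here is a deliberately reduced sketch (illustrative only, not the controller's real data structures): every node needs an explicit "true" entry, so a node whose executor never reported a result can never let the workflow complete.

```go
package main

import "fmt"

// taskResultsComplete returns true only when every node has an explicit
// completed entry; a missing entry (e.g. a pod that ran an older executor)
// counts as incomplete forever.
func taskResultsComplete(status map[string]bool, nodeIDs []string) bool {
	for _, id := range nodeIDs {
		if done, ok := status[id]; !ok || !done {
			return false
		}
	}
	return true
}

func main() {
	status := map[string]bool{"node-1": true} // node-2 never reported a result
	fmt.Println(taskResultsComplete(status, []string{"node-1", "node-2"})) // false, forever
}
```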
I found another issue: the task pods of the wf had already been cleaned up, but the wf is still Running.
root@10-16-10-122:/home/devops# kubectl get wf -n argo-map mapping-pipeline-1524786-1720665764
NAME STATUS AGE MESSAGE
mapping-pipeline-1524786-1720665764 Running 14h
root@10-16-10-122:/home/devops# kubectl get pods -n argo-map | grep mapping-pipeline-1524786-1720665764
root@10-16-10-122:/home/devops#
the "taskResultsCompletionStatus" of wf is:
taskResultsCompletionStatus:
mapping-pipeline-1524786-1720665764-1391268475: false
mapping-pipeline-1524786-1720665764-3212493707: false
The logs of the Argo controller show:
time="2024-07-12T03:31:08.133Z" level=debug msg="taskresults of workflow are incomplete or still have daemon nodes, so can't mark workflow completed" fromPhase=Running namespace=argo-map toPhase=Succeeded workflow=mapping-pipeline-1524786-1720665764
A workflow stuck in the Running state, even though the pods for it are in Error:
root@10-16-10-122:/home/devops# kubectl get wf -n argo-map-benchmark localization-benchmark-v2-7101-1720600493
NAME STATUS AGE MESSAGE
localization-benchmark-v2-7101-1720600493 Running 2d
root@10-16-10-122:/home/devops# kubectl get pods -n argo-map-benchmark | grep localization-benchmark-v2-7101-1720600493
localization-benchmark-v2-7101-1720600493-single-1265870429 0/2 Error 0 2d
localization-benchmark-v2-7101-1720600493-single-129032789 0/2 Error 0 2d
localization-benchmark-v2-7101-1720600493-single-1871596469 0/2 Error 0 2d
localization-benchmark-v2-7101-1720600493-single-2269257704 0/2 Error 0 2d
localization-benchmark-v2-7101-1720600493-single-2366564762 0/2 UnexpectedAdmissionError 0 2d
localization-benchmark-v2-7101-1720600493-single-2375535511 0/2 Error 0 2d
localization-benchmark-v2-7101-1720600493-single-2612496173 0/2 Error 0 2d
localization-benchmark-v2-7101-1720600493-single-2626411911 0/2 Error 0 2d
localization-benchmark-v2-7101-1720600493-single-2769513904 0/2 UnexpectedAdmissionError 0 2d
localization-benchmark-v2-7101-1720600493-single-3116854153 0/2 Error 0 2d
localization-benchmark-v2-7101-1720600493-single-3308776981 0/2 Error 0 2d
localization-benchmark-v2-7101-1720600493-single-3926467371 0/2 Error 0 2d
localization-benchmark-v2-7101-1720600493-single-4172479490 0/2 Error 0 2d
localization-benchmark-v2-7101-1720600493-single-4220742789 0/2 Error 0 2d
localization-benchmark-v2-7101-1720600493-single-4221498464 0/2 UnexpectedAdmissionError 0 2d
localization-benchmark-v2-7101-1720600493-single-45705550 0/2 Error 0 2d
localization-benchmark-v2-7101-1720600493-summary-406718820 0/2 UnexpectedAdmissionError 0 47h
https://github.com/argoproj/argo-workflows/blob/5aac5a8f61f4e8273d04509dffe7d80123ff67f5/workflow/controller/taskresult.go#L67 If the status of the pod is Error, I think the task result should be considered completed.
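As a rough sketch of that suggestion (a hypothetical helper, not an actual patch against taskresult.go): once the pod has reached a terminal failed phase, its task result could be treated as final instead of waiting for a completion label the wait container may never have written.

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// taskResultFinal treats a task result as final either when the executor
// reported completion or when the pod itself has terminally failed
// (e.g. OOMKilled), since no further report will ever arrive.
func taskResultFinal(reportedCompleted bool, podPhase corev1.PodPhase) bool {
	if reportedCompleted {
		return true
	}
	return podPhase == corev1.PodFailed
}
```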
We're experiencing the same issue, I believe (v3.5.7). Using Terminate or Stop does not help.
What action should we take in these cases? Is simply deleting the Workflow a safe approach, if we can?
https://github.com/argoproj/argo-workflows/pull/13332 cannot fix the issue when the Argo version is upgraded from v3.4.9 to v3.5.8: the pod status is Completed, but the task's entry in the wf's taskResultsCompletionStatus is always 'false'. @Joibel
Another issue: the pod is Completed, but the status of the task in the wf is 'false', and the status of the wf is Running:
taskResultsCompletionStatus:
prod--prod--lt-filter-1-1-95--2692e7e1-30e5-44a5-977a-d5c32dmh5-2722797653: false
root@10-16-10-122:/home/devops# kubectl get pods -n argo-data-closeloop | grep -i comple
prod--prod--lt-filter-1-1-95--2692e7e1-30e5-44a5-977a-d5c32dmh5-job-entrypoint-2722797653 0/2 Completed 0 14d
I don't agree that the PR https://github.com/argoproj/argo-workflows/pull/13332 can fix it.
I usually use the following command to recover the stuck workflow:
# find the name of the workflowtaskresult belonging to the completed pod.
kubectl label workflowtaskresult ${taskResultName} workflows.argoproj.io/report-outputs-completed=true
@stefanondisponibile Maybe you can try it, rather than deleting the workflow.
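If many task results are stuck, a small Go program along the same lines can relabel everything belonging to one workflow. This is only a sketch: it assumes the WorkflowTaskResults carry the usual workflows.argoproj.io/workflow label, and the namespace/workflow names below are placeholders taken from this thread.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn := dynamic.NewForConfigOrDie(cfg)
	gvr := schema.GroupVersionResource{Group: "argoproj.io", Version: "v1alpha1", Resource: "workflowtaskresults"}
	ns, wf := "argo-map", "mapping-pipeline-1524786-1720665764" // placeholders

	// List the task results that belong to the stuck workflow.
	list, err := dyn.Resource(gvr).Namespace(ns).List(context.TODO(),
		metav1.ListOptions{LabelSelector: "workflows.argoproj.io/workflow=" + wf})
	if err != nil {
		panic(err)
	}

	// Same effect as the kubectl label command above, applied to each result.
	patch := []byte(`{"metadata":{"labels":{"workflows.argoproj.io/report-outputs-completed":"true"}}}`)
	for _, item := range list.Items {
		if _, err := dyn.Resource(gvr).Namespace(ns).Patch(context.TODO(), item.GetName(),
			types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("patched", item.GetName())
	}
}
```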
@zhucan You are right. taskResultsCompletionStatus is always false if the label workflows.argoproj.io/report-outputs-completed is false or missing.
https://github.com/argoproj/argo-workflows/blob/d7495b83b519e0c39b49fe692485e95286ce6665/workflow/controller/taskresult.go#L66-L73
I agree. It won't fix an issue of taskResultsCompletionStatus being false. That's often caused by missing/incorrect RBAC.
Other thoughts: you may have set your executor to an old version?
PR #13332 only fixes the issue of taskResultsCompletionStatus being completely missing for a node.
Check the executor (wait container) logs and check you're getting WorkflowTaskResults created by your pods.
taskResultsCompletionStatus being completely missing for a node.
@Joibel 🤔 Can you explain in which case this will happen?
Thank you @jswxstw, setting workflows.argoproj.io/report-outputs-completed=true mitigates the issue :pray:
Per the respective issue and PR, it's only during an upgrade, e.g. your Controller is now 3.5.x but you still have some Workflows from before the upgrade running Executor 3.4.x.
I have analyzed this issue before in https://github.com/argoproj/argo-workflows/issues/12103#issuecomment-2185487682. According to my analysis, this will only result in the label LabelKeyReportOutputsCompleted being completely missing, but taskResultsCompletionStatus will always be false after taskResultReconciliation.
https://github.com/argoproj/argo-workflows/blob/d7495b83b519e0c39b49fe692485e95286ce6665/workflow/controller/taskresult.go#L66-L73
I can't think of a scenario where taskResultsCompletionStatus would be missing, since the WorkflowTaskResult is always created by the wait container.
I'll let Alan check in more detail; sorry I didn't go through the whole thread too closely, just thought I'd answer an outstanding question I stumbled upon.
We are having a similar issue where, if only a few jobs in a fanout fail, the workflow stays in the Running state, despite the fact that the exit handler has been called.
We have seen similar problems. In our case, in a workflow with steps where at least one step retried, we see sporadic occurrences where all steps appear as completed successfully and the workflow shows as green in the UI, but kubectl -n argo get <workflow> shows it as Running.
We have also seen cases where one or more steps failed and the workflow correctly appears as red in the UI, and yet kubectl -n argo get <workflow> shows it as Running.
We have seen this behaviour occasionally in sporadic workflows, ever since upgrading to 3.5.6. I don't recall seeing this with 3.5.5 (but I might be wrong about that - not sure)
In 3.4, WorkflowTaskResults have an owner reference to their Pod, so they can get deleted when a Pod is deleted. In 3.5 they are owned by the Workflow, which is more sensible. During an upgrade we can lose the results, and therefore hang waiting for them.
There is a fix that addresses this by marking the lost results as completed=true. Tagging here too @isubasinghe as he wrote it. I'm unsure this is the last fix for these issues.
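To illustrate the ownership difference (a sketch, not the project's actual code): whichever object the ownerReference points at controls when Kubernetes garbage-collects the WorkflowTaskResult, so pod-owned results vanish with pod cleanup while workflow-owned ones survive until the Workflow itself is deleted.

```go
package sketch

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ownedBy builds the kind of ownerReference being discussed. With the Pod as
// owner (3.4 behaviour) the WorkflowTaskResult is garbage-collected together
// with the pod; with the Workflow as owner (3.5 behaviour) it outlives pod GC.
func ownedBy(apiVersion, kind string, owner metav1.Object) metav1.OwnerReference {
	return metav1.OwnerReference{
		APIVersion: apiVersion,
		Kind:       kind,
		Name:       owner.GetName(),
		UID:        owner.GetUID(),
	}
}

// e.g. taskResult.OwnerReferences = []metav1.OwnerReference{
//     ownedBy("argoproj.io/v1alpha1", "Workflow", wf), // 3.5-style ownership
// }
```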
https://github.com/argoproj/argo-workflows/pull/12574#issuecomment-1914797862 @shuangkun Can you take a look at this? The WorkflowTaskResult will always be incomplete if the pod is interrupted because it was OOM-killed or evicted, and then the workflow will be stuck in Running.
I don't understand why this issue is marked as only P3.
We are seeing multiple occurrences of workflows with one or more steps that retry one or more times (e.g. due to OOMKilled), where the retries eventually succeed and the workflow appears as green in the UI, but still shows as Running in the workflows table on the /workflows page (and also shows as Running via kubectl -n argo get workflow).
This incurs a really bad ripple effect on any systems monitoring the status of Argo Workflows.
Surely this should be treated as a Priority 1 issue?
@yonirab The scenario described in this issue is different from your situation; a more relevant issue should be #12993 or #13373.
@jswxstw I discussed your comment above and the OOM scenario with @isubasinghe last night during the contributor meeting and he suspected that #13454 missed the scenario of a Pod that failed/errored but was not yet GC'd (so exists, not yet absent).
Retry scenarios might also need some specific code (Pod errored but user has a retryStrategy).
@agilgur5 "Pod errored but user has a retryStrategy" is indeed our exact scenario.
This incurs a really bad ripple effect on any systems monitoring the status of Argo Workflows.
Surely this should be treated as a Priority 1 issue?
This variant seems to have 0 upvotes and so is potentially a more specific case that is rarer.
I have submitted a new PR #13491 to fix this scenario.
We are having a similar issue with one of our workflows. It is a very simple container, but it is not finalizing even though it has completed. The output of the wait container shows this:
time="2024-09-20T15:37:02 UTC" level=info msg="Starting Workflow Executor" version=v3.5.6
time="2024-09-20T15:37:02 UTC" level=info msg="Using executor retry strategy" Duration=1s Factor=1.6 Jitter=0.5 Steps=5
time="2024-09-20T15:37:02 UTC" level=info msg="Executor initialized" deadline="0001-01-01 00:00:00 +0000 UTC" includeScriptOutput=false namespace=files-api-stage podName=files-api-workflow-template-h4bfg templateName=files-api-workflow version="&Version{Version:v3.5.6,BuildDate:2024-04-19T20:54:43Z,GitCommit:555030053825dd61689a086cb3c2da329419325a,GitTag:v3.5.6,GitTreeState:clean,GoVersion:go1.21.9,Compiler:gc,Platform:linux/amd64,}"
time="2024-09-20T15:37:02 UTC" level=info msg="Starting deadline monitor"
time="2024-09-20T15:37:04 UTC" level=info msg="Main container completed" error="<nil>"
time="2024-09-20T15:37:04 UTC" level=info msg="No Script output reference in workflow. Capturing script output ignored"
time="2024-09-20T15:37:04 UTC" level=info msg="No output parameters"
time="2024-09-20T15:37:04 UTC" level=info msg="No output artifacts"
time="2024-09-20T15:37:04 UTC" level=info msg="Alloc=9786 TotalAlloc=13731 Sys=25957 NumGC=3 Goroutines=8"
time="2024-09-20T15:37:04 UTC" level=info msg="stopping progress monitor (context done)" error="context canceled"
Update: I found out why our pods were hanging. We use Linkerd injection for security, and if that happened on a workflow job it would fail to complete because the linkerd proxy would remain running inside the job. We needed to add the following into the template for the workflows to complete successfully:
metadata:
annotations:
linkerd.io/inject: disabled
The Pod wouldn't be Completed then, it'd be stuck in Running, so that's different from this issue.
See also the "Sidecar Injection" page of the docs
@jeverett1522 we had a similar issue and used the following kill command annotation for Argo Workflows:
workflows.argoproj.io/kill-cmd-linkerd-proxy: ["/usr/lib/linkerd/linkerd-await","sleep","1","--shutdown"]
With this pod annotation added to the workflow, the linkerd-proxy is killed after the workflow finishes.
What happened/what you expected to happen?
The workflow has been running for more than 20 hours even though I have activeDeadlineSeconds set at 12 hours.
The workflow just has a single step, which also shows as 'Running' in the Argo UI, but looking at its logs shows that it has completed the code that I expect for that step and also shows
time="2023-10-29T00:53:09 UTC" level=info msg="sub-process exited" argo=true error="<nil>"
at the end of the main log. The pod itself for that step is in the Completed state. There are other workflows that have completed as expected during this time, and no other workflows running right now. Note this exact workflow has successfully run 1000s of times in the past, so I know my spec/permissions are correct.
Version
3.4.11