flyteorg / flyte

Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
https://flyte.org
Apache License 2.0

[BUG] Subworkflow status is not reported to the parent workflow #5335

Closed: pablocasares closed this issue 3 weeks ago

pablocasares commented 6 months ago

Describe the bug

After upgrading our cluster to v1.12.0, some workflow executions got stuck in the "running" state when a subworkflow node failed. It seems that the failed state from the subworkflow is not being reported to the parent workflow, so the parent workflow gets stuck in the running state.

We rolled back flytepropeller to v1.11.0 and the issue disappeared. It looks like a race condition, since it happens from time to time and restarting the workflow seems to mitigate the problem.

Expected behavior

The subworkflow status should be reported back to the parent workflow.

Additional context to reproduce

No response

Screenshots


Logs keep saying "LaunchPlan running, parallelism is now set to [X]"; the "Node Event [failing]" entry seems to be missing.


runllm[bot] commented 6 months ago

The issue you described, where a subworkflow's failed state is not reported to the parent workflow, causing the parent to remain in a "running" state, is not explicitly covered in the provided data sources. However, similar issues related to subworkflow status reporting and handling have been documented. For instance, there was a problem where a timed-out task in a subworkflow did not cause the workflow to fail (source). Another related issue involved aborted child workflows not properly reporting the root cause of the abort (source).

These cases suggest that Flyte has had challenges with accurately handling and reporting the status of subworkflows under certain conditions. Your specific issue might be related to these known problems, but without explicit mention in the sources, it's difficult to provide a definitive answer or solution based solely on the provided data.

Would you like more detailed information on how similar issues have been addressed in the past?


kumare3 commented 6 months ago

This is a pretty serious bug? cc @pvditt

pvditt commented 6 months ago

@pablocasares thank you for pointing this out. Were you able to determine if the workflow was still getting executed by propeller or was it just the console showing it as running? (Looking to see if this is just an eventing/persisting of state to admin bug)

Update: seems it's still running as you're still seeing logs.

pvditt commented 6 months ago

@pablocasares My initial thought was that this had to do with the cache not getting updated, but I'm not noticing anything while stepping through the code and looking at the changes between 1.11.0 and 1.12.0.

Also, to clarify: are you observing this behavior with a parent workflow starting subworkflows, or with external workflows launched via launch plans?

Are you able to provide a workflow that could reproduce the error? I'm unable to repro running on flyte sandbox.

pablocasares commented 6 months ago

Hi @pvditt, thank you for taking a look at this.

We noticed the issue in workflows that have external subworkflows via launch plans.

We weren't able to reproduce it either, because it doesn't happen in every execution. It seems to happen only in some executions of the workflows, and those workflows don't fail consistently. Due to our high load, we hit the case from time to time.

After we downgraded propeller to v1.11.0 yesterday this issue did not happen again and the subworkflow tasks that were stuck on "Running" went to "Failed" as expected.

Also, yesterday after the downgrade to v1.11.0 we noticed another issue that might be related to this. I'm not sure if this helps, but I will share it just in case. In one workflow execution the subworkflow failed and the parent failed after that, but one node got stuck in the "Running" state and the error message shown in flyteconsole was:

Workflow[sp-annotation-data:development:sp_annotation_data.workflows.upload_samples_workflow.upload_samples_workflow] failed. RuntimeExecutionError: max number of system retry attempts [51/50] exhausted. Last known status message: Workflow[sp-annotation-data:development:sp_annotation_data.workflows.upload_samples_workflow.upload_samples_workflow] failed. CausedByError: Failed to propagate Abort for workflow. Error: 0: 0: 0: [system] unable to read futures file, maybe corrupted, caused by: [system] Failed to read futures protobuf file., caused by: path:gs://mybucket/metadata/propeller/wf-dev-4a61a44033f/n6-n5/data/0/futures.pb: not found
1: 0: [system] unable to read futures file, maybe corrupted, caused by: [system] Failed to read futures protobuf file., caused by: path:gs://mybucket/metadata/propeller/wf-dev-4a61a44033f/n6-n5/data/0/futures.pb: not found

Please note that we are on Flyte Admin v1.12.0 and Propeller v1.11.0, and we noticed this just for this one case. We cannot confirm that it also happens when both components are on v1.12.0. I'm sharing it just in case it helps you identify the issue.
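One way to sanity-check the "futures.pb: not found" part of that error is to look for the object directly at the path propeller reported. Below is a minimal diagnostic sketch using the standard cloud.google.com/go/storage client; the bucket and key are copied from the error message above and would need to be adjusted for the real deployment.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Bucket and key taken from the error message above; adjust as needed.
	obj := client.Bucket("mybucket").Object("metadata/propeller/wf-dev-4a61a44033f/n6-n5/data/0/futures.pb")
	attrs, err := obj.Attrs(ctx)
	if errors.Is(err, storage.ErrObjectNotExist) {
		fmt.Println("futures.pb is genuinely missing at that path")
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("futures.pb exists: %d bytes, last updated %s\n", attrs.Size, attrs.Updated)
}
```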

Thank you.

pvditt commented 6 months ago

@pablocasares thank you for the added info. And just to circle back/confirm a few things:

pablocasares commented 6 months ago

The parent workflow was still getting executed but stuck because it thought that the subworkflow node was still running (you can check the yaml I sent in our internal Slack channel)

Yes, we do have workflows with failing external subworkflows that are handled correctly on 1.12.0.

Yes, we aborted the workflow and then relaunched it. As I said, this happens only in some of the executions. A failing external subworkflow is needed for this to happen, but a failing external subworkflow doesn't mean the issue will happen. In other words, this happens sometimes when the external subworkflow fails; in some executions the external subworkflow fails and it is handled properly.

pvditt commented 6 months ago

@pablocasares thank you for the follow-up. Apologies for the mix-up - I was just added to the Slack channel. Let me look back into this.

pvditt commented 6 months ago

@pablocasares would you still have access to your propeller logs? If so, can you check whether "Retrieved Launch Plan status is nil. This might indicate pressure on the admin cache." was getting logged when you noticed the issue with propeller v1.12.0?
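For context on why that log line matters: if the admin-launcher cache comes back with no status for the child execution, the parent node has nothing to transition on and simply stays in Running until a later evaluation round. The snippet below is a conceptual, self-contained sketch of that behavior, not the actual flytepropeller handler; the type and function names are illustrative only.

```go
package launchplan

import "log"

type NodePhase string

const (
	NodePhaseRunning   NodePhase = "Running"
	NodePhaseSucceeded NodePhase = "Succeeded"
	NodePhaseFailed    NodePhase = "Failed"
)

// ChildStatus is a stand-in for the execution status the admin-launcher cache would return.
type ChildStatus struct {
	Phase string // e.g. "RUNNING", "SUCCEEDED", "FAILED", "ABORTED"
}

// evaluateChild shows how a nil cached status keeps the parent node waiting.
func evaluateChild(cached *ChildStatus) NodePhase {
	if cached == nil {
		// This is the situation the quoted log line reports.
		log.Println("Retrieved Launch Plan status is nil. This might indicate pressure on the admin cache.")
		return NodePhaseRunning // the parent keeps waiting for the next evaluation round
	}
	switch cached.Phase {
	case "FAILED", "ABORTED":
		return NodePhaseFailed
	case "SUCCEEDED":
		return NodePhaseSucceeded
	default:
		return NodePhaseRunning
	}
}
```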

pablocasares commented 6 months ago

Hi again @pvditt, I checked the logs and I can see that message both when we had propeller v1.12.0 and even now with v1.11.0. It seems to be happening several times per minute.

pvditt commented 6 months ago

@pablocasares

I think we've potentially pinned the problem down, but I'm having difficulty reproducing the race condition. Would you still have access to the flyteadmin logs from when child/external workflows were not propagating status to their parent workflow? I'm interested to see whether there was continued polling of GetExecution for the execution_id of a subworkflow by the admin-launcher's cache update loop.

i.e.:

2024/05/16 01:11:16 /Users/pauldittamo/src/flyte/flyteadmin/pkg/repositories/gormimpl/execution_repo.go:44
[15.565ms] [rows:1] SELECT * FROM "executions" WHERE "executions"."execution_project" = 'flytesnacks' AND "executions"."execution_domain" = 'development' AND "executions"."execution_name" = 'fh662hcses4ry1' LIMIT 1

Note, there could still be occasional logs showing this, since other parts of Flyte such as the console will hit this endpoint. I'm looking to see whether those continued logs stop appearing at the cadence of the cache sync cycle duration (defaults to 30s) while the parent workflow is stuck in Running.
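To make the expected pattern concrete, here is a minimal, self-contained sketch (not the real admin-launcher code) of the kind of cache update loop being described: every sync cycle the launcher re-polls GetExecution for each child execution it tracks and caches the returned phase. If the stuck parent's child execution never appears in those polls, the cache has effectively stopped tracking it, which lines up with the nil-status log discussed above.

```go
package launcher

import (
	"context"
	"log"
	"time"
)

// getExecution stands in for the flyteadmin GetExecution RPC; in the real
// system each call produces a SELECT ... FROM "executions" query like the
// one quoted above.
type getExecution func(ctx context.Context, executionName string) (phase string, err error)

// syncLoop refreshes the cached phase of every tracked child execution once
// per sync cycle (the cadence that defaults to ~30s).
func syncLoop(ctx context.Context, tracked []string, fetch getExecution, cache map[string]string, cycle time.Duration) {
	ticker := time.NewTicker(cycle)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			for _, name := range tracked {
				phase, err := fetch(ctx, name)
				if err != nil {
					// Sync errors would be counted here.
					log.Printf("sync error for %s: %v", name, err)
					continue
				}
				cache[name] = phase // the parent node reads this on its next evaluation
			}
		}
	}
}
```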

pablocasares commented 6 months ago

Hi @pvditt thanks for the update.

I did a quick search on the logs and I found only 1 line with the parent wf execution id:

SELECT * FROM "executions" WHERE "executions"."execution_project" = 'key-metrics-pipelines' AND "executions"."execution_domain" = 'production' AND "executions"."execution_name" = 'nfu3ftwauvzag7e23pgx' LIMIT 1

I used the same filter with the subworkflow execution id and I couldn't find any line.

I don't see continued logs for the parent execution id (just that one line), and there are no logs at all for the child workflow.

RRap0so commented 4 months ago

Hey friends!

We've tested this with the release candidate. Things seem somewhat better, but we're still seeing the same problem; this time, though, the pipelines eventually finish after around 1 hour.

We couldn't reproduce it, but we found a couple of runs where the Remote LaunchPlan node just stays stuck, even though checking the sub-execution shows everything green.
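One way to confirm what's described here (parent node stuck while the child execution is actually green) is to ask flyteadmin for the child execution's phase directly, via the same GetExecution endpoint discussed earlier in the thread. A hedged sketch follows; the flyteidl import paths, the admin endpoint, and the project/domain/execution name are assumptions that vary by release and deployment.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	"github.com/flyteorg/flyte/flyteidl/gen/pb-go/flyteidl/admin"
	"github.com/flyteorg/flyte/flyteidl/gen/pb-go/flyteidl/core"
	"github.com/flyteorg/flyte/flyteidl/gen/pb-go/flyteidl/service"
)

func main() {
	// Assumed in-cluster flyteadmin gRPC endpoint; adjust for your deployment.
	conn, err := grpc.Dial("flyteadmin.flyte.svc.cluster.local:81",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := service.NewAdminServiceClient(conn)
	// Hypothetical identifiers for the stuck child execution.
	resp, err := client.GetExecution(context.Background(), &admin.WorkflowExecutionGetRequest{
		Id: &core.WorkflowExecutionIdentifier{
			Project: "key-metrics-pipelines",
			Domain:  "production",
			Name:    "child-execution-id",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("child execution phase:", resp.GetClosure().GetPhase())
}
```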

pvditt commented 4 months ago

@RRap0so thank you for testing this out.

What value do you have set for downstream-eval-duration?

RRap0so commented 4 months ago

downstream-eval-duration: 10s

pvditt commented 4 months ago

@RRap0so thank you again.

With those couple runs that were stuck, did they resolve themselves to the correct state without any intervention?

Also, did those instances occur during times when there was a higher number of external sub-workflows running across that propeller instance? Would you still have access to the metrics from when those workflows were stuck? I'd be interested in admin-launcher:SyncErrors and admin-launcher:Size.

RRap0so commented 4 months ago

They resolved themselves (taking roughly 1h 20min). The way we have things set up, most of the workflows we run use sub-workflows. We have some shared workflows within the org that folks use for dependency checking, etc., so we're always running a large number of external sub-workflows.

I'll try to find those metrics for you.

pvditt commented 3 weeks ago

https://github.com/flyteorg/flyte/commit/d0628249f5935542c1a09f0f257e8274a7eef5c7