
[Core feature] Sub-executions should respect LaunchPlan MaxParallelism if set #5555

Open RRap0so opened 2 months ago

RRap0so commented 2 months ago

Motivation: Why do you think this is important?

There could be many reasons why a remote LaunchPlan has MaxParallelism set. I propose that any sub-execution should respect that limit instead of inheriting the MaxParallelism of the parent execution.

Goal: What should the final outcome look like, ideally?

If Workflow A, with a maxParallelism of 100, triggers LaunchPlan B, which has a maxParallelism of 1, then the sub-execution should run with a maxParallelism of 1.
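
For concreteness, a minimal flytekit sketch of that scenario (the task, workflow, and launch plan names here are hypothetical, and the `max_parallelism` argument to `LaunchPlan.get_or_create` is assumed to be available, as in recent flytekit releases):

```python
from flytekit import LaunchPlan, task, workflow

@task
def heavy_task(x: int) -> int:
    return x * 2

@workflow
def wf_b(x: int) -> int:
    return heavy_task(x=x)

# LaunchPlan B: cap concurrent task nodes of this workflow at 1.
lp_b = LaunchPlan.get_or_create(
    workflow=wf_b,
    name="lp_b",
    max_parallelism=1,
)

@workflow
def wf_a(x: int) -> int:
    # Invoking lp_b here creates a sub-execution. The request is that this
    # sub-execution run with lp_b's maxParallelism (1) rather than inherit
    # the parent execution's maxParallelism (e.g. 100).
    return lp_b(x=x)
```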

Describe alternatives you've considered

I haven't considered any alternatives.

Propose: Link/Inline OR Additional context

No response


runllm[bot] commented 2 months ago

This issue is related to ensuring that sub-executions respect the max_parallelism setting of their respective LaunchPlan rather than inheriting the max_parallelism of the parent workflow. This feature is crucial for maintaining control over the parallelism settings of individual workflows, especially when different workflows have different resource requirements or operational constraints.

According to this discussion and this GitHub issue, there are ongoing concerns and reports about the max_parallelism setting not being respected in certain scenarios, such as within dynamic workflows or subworkflows. The goal is to ensure that if Workflow A with max_parallelism of 100 triggers LaunchPlan B with max_parallelism of 1, then LaunchPlan B should execute with a max_parallelism of 1.
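
As one illustration of how a caller can pin `max_parallelism` on a triggered execution explicitly today, here is a sketch using flytekit's `FlyteRemote` (the project, domain, and launch plan names are hypothetical; `Options.max_parallelism` is assumed to be exposed as in recent flytekit releases):

```python
from flytekit.configuration import Config
from flytekit.remote import FlyteRemote
from flytekit.tools.translator import Options

remote = FlyteRemote(
    config=Config.auto(),
    default_project="flytesnacks",  # hypothetical project
    default_domain="development",   # hypothetical domain
)

# Fetch the registered launch plan (name is hypothetical).
lp_b = remote.fetch_launch_plan(name="lp_b")

# Pin max_parallelism on this execution explicitly. The feature request is
# for sub-executions to pick up the launch plan's own value automatically,
# without the caller having to pass it like this.
execution = remote.execute(
    lp_b,
    inputs={"x": 1},
    options=Options(max_parallelism=1),
)
```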
