radical-collaboration / hpc-workflows

NSF16514 EarthCube Project - Award Number: 1639694

Best strategy for jobs with dynamic allocation #140

Closed: Weiming-Hu closed this issue 3 years ago

Weiming-Hu commented 3 years ago

I'm wondering what the best practice would be for a workflow with dynamic allocation. For example, say I have 10 tasks in a stage, but they have different computational costs: one task might take 3 hours while the others only take 1 hour. If I configure full parallelization (enough resources for all tasks to run at the same time), I basically waste 2 hours on each of the resources whose tasks finish earlier.

I'm wondering whether EnTK currently has any feature that addresses this. If not, I would greatly appreciate any suggestions for improving the efficiency of this type of workflow.

Thank you

andre-merzky commented 3 years ago

Hi @Weiming-Hu: we do not have any capabilities to help shape the pilot size. As a rule of thumb, though: the wider the pilot, the smaller the total time of execution; the smaller the pilot, the better the resource utilization. It really boils down to a tradeoff between those two metrics and what values you want them to have. I should emphasize, though, that the stack is built to run many tasks, and for a very small number of tasks we likely won't be able to pack them very efficiently.
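
To make that tradeoff concrete, here is a toy back-of-the-envelope calculation (not EnTK code, just a greedy packing of the example from the original post: nine 1-hour tasks and one 3-hour task, each assumed to use a single core):

```python
# Toy illustration of the pilot-width tradeoff: greedily pack nine 1-hour
# tasks and one 3-hour task onto pilots of different widths and compare
# makespan vs. resource utilization.
import heapq

def schedule(durations, width):
    # each heap entry is the time at which a core becomes free again
    cores = [0.0] * width
    heapq.heapify(cores)
    for d in sorted(durations, reverse=True):      # longest task first
        start = heapq.heappop(cores)
        heapq.heappush(cores, start + d)
    makespan  = max(cores)
    used      = sum(durations)                     # core-hours actually used
    allocated = width * makespan                   # core-hours paid for
    return makespan, used / allocated

durations = [3.0] + [1.0] * 9                      # hours, one core each

for width in (10, 4):
    makespan, util = schedule(durations, width)
    print(f'{width:2d} cores: makespan {makespan:.0f} h, utilization {util:.0%}')

# 10 cores: makespan 3 h, utilization 40%
#  4 cores: makespan 3 h, utilization 100%
```

In this particular example a 4-core pilot reaches the same makespan as a 10-core one while wasting nothing; real workloads rarely pack that cleanly, but the direction of the tradeoff is the same.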

Weiming-Hu commented 3 years ago

When you say many tasks, could you give me a rough number, just so I have a general idea? Thank you.

andre-merzky commented 3 years ago

No fixed number, but it relates to your question: we expect to use any specific core or GPU for multiple tasks. So we basically expect that a set of tasks gets placed on the resources, some of them finish and get replaced with new tasks, and so on, for several or many rounds of replacement.

Consider this: if we only used each resource for a single task, then a batch script that starts all tasks concurrently would have the very same effect, and RCT would only add overhead.

Weiming-Hu commented 3 years ago

That makes sense to me. I find myself in the second situation you mention, where each resource is used by only one task.

But I still prefer EnTK because I can code up my workflow to request a dynamic number of cores based on the expected computation. Setting that manually would be a pain for me (50+ tasks in a single stage). It would also be a lot harder to maintain that functionality with bash scripts rather than having everything nicely packed in Python.
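
For reference, a rough sketch of what that per-task sizing can look like in EnTK; the executable, the `estimate_cores()` cost model and the task names are placeholders, and the exact `cpu_reqs` keys should be checked against the EnTK release in use:

```python
from radical.entk import Pipeline, Stage, Task

def estimate_cores(i):
    # placeholder: replace with the workflow's own per-task cost estimate
    return 4 if i % 10 == 0 else 1

stage = Stage()
for i in range(50):
    task            = Task()
    task.name       = 'case-%02d' % i
    task.executable = 'my_solver'            # hypothetical executable
    task.arguments  = ['case-%02d' % i]
    # key names may differ slightly between EnTK releases
    task.cpu_reqs   = {'cpu_processes'   : 1,
                       'cpu_process_type': None,
                       'cpu_threads'     : estimate_cores(i),
                       'cpu_thread_type' : 'OpenMP'}
    stage.add_tasks(task)

pipeline = Pipeline()
pipeline.add_stages(stage)
```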

That being said, I guess that, to improve the efficiency, I just have to request fewer resources?

andre-merzky commented 3 years ago

I don't want to discourage you from using EnTK, obviously :-) But yes, for tasks with widely varying runtimes I would suggest using a smaller pilot, so that tasks can shuffle in and out a number of times.
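
For completeness, a minimal sketch of how such a deliberately narrow pilot might be requested, continuing the hypothetical pipeline sketched above; the resource label, walltime and core count are placeholders, and older EnTK releases also require RabbitMQ connection arguments for AppManager:

```python
from radical.entk import AppManager

appman = AppManager()                    # older releases additionally need
                                         # RabbitMQ hostname/port arguments
appman.resource_desc = {
    'resource': 'local.localhost',       # placeholder resource label
    'walltime': 180,                     # minutes: long enough for several
                                         # rounds of tasks shuffling in and out
    'cpus'    : 16,                      # deliberately fewer cores than the
                                         # 50 tasks above would need at once
}
appman.workflow = [pipeline]             # 'pipeline' from the sketch above
appman.run()
```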