kube-HPC / hkube

🐟 High Performance Computing over Kubernetes - Core Repo 🎣
http://hkube.io
MIT License

Dynamic algorithm resource allocation for algorithms/tasks #1172

Closed mattanf2 closed 2 years ago

mattanf2 commented 3 years ago

The ability to define how much memory/CPU an algorithm will run with at run time, perhaps supplying that value from the input of a previous algorithm.

I have a case with a pipeline that contains 2 tasks: Mosaic Construct and Mosaic Render.

There are two features I can see that would fix this issue. The first, and the best option for me, is for HKube to implement a feature where I can define the amount of CPU/memory/GPU a task will use based on the input of a previous task.
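
For illustration only, here is roughly what that requested capability could look like as a pipeline node. This is a hypothetical sketch, not an existing hkube feature: the "resources" block and the "@construct.*" field references are made up, and only the "@node"/"@flowInput" reference style is borrowed from hkube's usual pipeline descriptors.

```python
# Hypothetical sketch of the *requested* capability -- not supported by hkube today.
# A "render" node whose cpu/mem/gpu requests would come from fields emitted by the
# previous "construct" node. All field names here are invented for illustration.
requested_pipeline = {
    "name": "mosaic",
    "nodes": [
        {
            "nodeName": "construct",
            "algorithmName": "mosaic-construct",
            "input": ["@flowInput.images"],
        },
        {
            "nodeName": "render",
            "algorithmName": "mosaic-render",
            "input": ["@construct"],
            # Hypothetical block: take the resource requests from construct's output.
            "resources": {
                "cpu": "@construct.requiredCpu",
                "mem": "@construct.requiredMem",
                "gpu": "@construct.requiredGpu",
            },
        },
    ],
}
```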

The second option is something of a hack and requires some explaining. Attempting to fix this problem, I tried to create a pipeline with several Mosaic Render algorithms, each one registered with a different amount of memory (5gb, 10gb, 20gb, ..., 40gb) and running as a batch (accepting an array from Mosaic Construct). I then tried to place the instructions to render an image into the correct array based on the amount of memory I needed. This didn't work: hkube attempts to preload all the algorithms simultaneously, either wasting even more memory or getting stuck because it couldn't load all the Mosaic Render algorithms together.
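
A rough sketch of this workaround, assuming hkube's usual "@node.field" result references and "#" batch prefix; the tiered algorithm names and the per-size output arrays are made up for illustration:

```python
# Workaround sketch: several pre-registered Mosaic Render variants, each with a
# different memory request, each batching over a separate array produced by
# Mosaic Construct. Names and field layout are assumptions, not hkube APIs.
workaround_pipeline = {
    "name": "mosaic-tiered",
    "nodes": [
        {
            "nodeName": "construct",
            "algorithmName": "mosaic-construct",
            "input": ["@flowInput.images"],
        },
        {
            "nodeName": "render-5gb",
            "algorithmName": "mosaic-render-5gb",   # registered with a 5gb mem request
            "input": ["#@construct.smallJobs"],     # batch over the "small" array
        },
        {
            "nodeName": "render-40gb",
            "algorithmName": "mosaic-render-40gb",  # registered with a 40gb mem request
            "input": ["#@construct.largeJobs"],     # batch over the "large" array
        },
    ],
}
# The reported problem: hkube tries to schedule workers for every render variant
# up front, even when the array a variant batches over turns out to be empty.
```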

If you could supply a flag in a pipeline that says "don't preload an algorithm called with an array until you know that the array contains values", I could use this technique to reduce memory consumption.
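
Purely as an illustration of that request, such a flag could be a per-node option along these lines (the option name below is invented; no such flag exists in hkube):

```python
# Hypothetical per-node flag: do not spin up workers for a batch node until its
# input array is known to be non-empty. "skipPreloadOnEmptyBatch" is a made-up name.
lazy_render_node = {
    "nodeName": "render-40gb",
    "algorithmName": "mosaic-render-40gb",
    "input": ["#@construct.largeJobs"],
    "skipPreloadOnEmptyBatch": True,
}
```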

mattanf2 commented 3 years ago

Edit: if you end up going with the implementation where the resources of an algorithm are set based on another, there are a couple of other features that should probably go with it, such as including information on the available machines (number of machines, number of cores and amount of memory in each one) in the input to the algorithm.
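
As an illustration of the kind of information meant here, assuming a made-up structure (none of these fields or names exist in hkube):

```python
# Hypothetical cluster-capacity info passed to the algorithm alongside its input,
# so it can size its work accordingly. Field names are invented for illustration.
cluster_info = {
    "nodes": [
        {"name": "worker-1", "cpuCores": 32, "memGiB": 128, "gpus": 2},
        {"name": "worker-2", "cpuCores": 16, "memGiB": 64,  "gpus": 0},
    ],
}

def mosaic_construct(options):
    """Sketch: split rendering work so no single task exceeds the largest machine."""
    cluster = options.get("clusterInfo", cluster_info)
    max_mem = max(node["memGiB"] for node in cluster["nodes"])
    # ...decide how to partition the mosaic so each render task fits in max_mem...
    return {"maxUsableMemGiB": max_mem}
```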

maty21 commented 2 years ago

After investigation, we decided not to implement this feature. We also suggested a couple of alternatives to tackle this issue.