Parallel-in-Time / time4apint

Mixing GFM method with PinT task scheduling
https://jupyterhub.mat.tu-harburg.de/blockops/

Arguments for `BlockIteration.speedup` method #10

Closed tlunet closed 1 year ago

tlunet commented 1 year ago

Hey @JensHahne, just to check with you: I would like to make the speedup method return a tuple (speedup, efficiency), and eventually rename it getPerformance, which would simplify the implementation and use.

I see that, for now, you also provide the number of processors nProc as an argument: but shouldn't this already be determined from the number of blocks and the schedule_type argument? E.g. for BLOCK-BY-BLOCK, nProc=N; for LCF, nProc is determined by the scheduler; etc.?

tlunet commented 1 year ago

PS: actually, if I understand correctly, I think the getPerformance method could return the speedup, efficiency and number of processors for one given number of iterations per block, right?
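The interface proposed above could look something like the following sketch. This is not the actual BlockOps code: the function name, arguments and the runtime inputs are assumptions used only to illustrate the (speedup, efficiency, nProc) return tuple.

```python
def getPerformance(runtime_seq, runtime_par, nProc):
    """Return (speedup, efficiency, nProc) for one parallel-in-time run.

    runtime_seq : wall-clock time of the sequential time-stepping run
    runtime_par : wall-clock time of the parallel (PinT) run
    nProc       : number of processors used by the schedule
    """
    speedup = runtime_seq / runtime_par
    # Parallel efficiency: how well the nProc processors are utilized
    efficiency = speedup / nProc
    return speedup, efficiency, nProc
```

With this shape, a caller gets all three performance quantities in one call instead of querying speedup and efficiency separately.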

JensHahne commented 1 year ago

Sure, a getPerformance method probably makes more sense.

I think the argument is necessary when using other methods and/or schedules. It is not always the case that you use the same number of processors as blocks; for MGRIT this is quite unusual. Also, maybe you only have a limited number of resources.

From my perspective it should be: I have this problem and these resources, give me the best performance (in terms of speedup or efficiency). Not: you could achieve this speedup using this many processors.

tlunet commented 1 year ago

Ok, I see. But doesn't it clash with the Optimal scheduling idea (with an unlimited number of processors, what is the best speedup I can get ...)?

What about meeting halfway: make nProc an optional argument that can be "forced" by the user? (Even with a BLOCK-BY-BLOCK schedule, I'm not sure how we can have fewer processors than blocks ...)
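The compromise suggested here could be sketched as below. All names (scheduleDefaultNProc, the schedule strings) are hypothetical, not the real BlockOps API; the point is only that nProc falls back to a schedule-dependent default when the user does not force it.

```python
def scheduleDefaultNProc(nBlocks, schedule_type):
    """Default processor count for a given schedule (illustrative only)."""
    if schedule_type == "BLOCK-BY-BLOCK":
        # One processor per block by default
        return nBlocks
    # Other schedules (e.g. LCF, Optimal) would compute their own
    # default here; fall back to one processor per block for the sketch.
    return nBlocks

def resolveNProc(nBlocks, schedule_type, nProc=None):
    """Use the user-forced nProc if given, else the scheduler's default."""
    if nProc is None:
        nProc = scheduleDefaultNProc(nBlocks, schedule_type)
    return nProc
```

A getPerformance method could call resolveNProc first and then compute speedup and efficiency with the resulting processor count.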

JensHahne commented 1 year ago

Yes, the optimal one is a special case. It is not really "usable"; it should give you an idea of what an optimal schedule could look like. But it is a completely theoretical schedule, which could not really be implemented (unlike the other schedules).

For BLOCK-BY-BLOCK you put multiple consecutive blocks on the same processor. This is quite common for MGRIT. PFASST often uses the windowing strategy instead, but both are valid implementations.
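The distribution described above, with fewer processors than blocks, can be illustrated with a small sketch (the function name is hypothetical, not BlockOps code): each processor owns a contiguous chunk of consecutive blocks.

```python
def assignBlocks(nBlocks, nProc):
    """Map each block index to a processor, keeping consecutive
    blocks on the same processor (BLOCK-BY-BLOCK style)."""
    chunk = -(-nBlocks // nProc)  # ceil division: blocks per processor
    return [b // chunk for b in range(nBlocks)]
```

For example, 6 blocks on 2 processors gives the assignment [0, 0, 0, 1, 1, 1]: processor 0 runs blocks 0 to 2 in sequence, processor 1 runs blocks 3 to 5.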

tlunet commented 1 year ago

So ... nProc as optional, with a default value determined by the Scheduler? From my point of view, this should allow us to consider any scheduling strategy through the same interface. What do you think?

JensHahne commented 1 year ago

I am not a huge fan. I think the number of processors is quite important. I agree that you should consider each strategy, but if the user doesn't understand that one strategy uses x processors and another uses x+1 processors, this could be quite confusing.

But I think this is more a general question of what we want. I would go with "problem" and "resources" as input, and the best strategy to achieve the best speedup/efficiency as output. I think your approach is more like: only "problem" as input, and give me an overview of what is possible.

We can try it out; in the worst case it is not too hard to change it back.

tlunet commented 1 year ago

Ok, I'll push some commits in this direction soon, along with some modifications for #11 ...