I think one of the earlier ideas behind the graph and the not-quite-Turing-complete nature of the language is the answer here: we should compute a rough estimate of the time an opcode will take under various parallelization conditions and choose the best one given the overall state of the VM across threads. For example, if there's lots of concurrent activity happening already, we know that scheduling overhead will start to dominate gains from parallelization, while a mostly "empty" VM can fan out as far as makes sense. In other words, the predicted execution time at each level of parallelization informs "how parallel" to go.
For now, before we're ready to start tackling that whole part of the language, I would bias toward more parallelism over less, tbh.
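To make that trade-off concrete, here's a minimal sketch of the kind of cost model described above. Everything in it is hypothetical: the constant, the function names, and the thresholds are invented for illustration, and nothing like this exists in the AVM yet.

```rust
/// Rough per-task scheduling overhead in nanoseconds (invented constant).
const SCHEDULING_OVERHEAD_NS: u64 = 50_000;

/// Estimated time to apply one opcode to `elements` items, in nanoseconds.
/// In a real implementation this could come from static analysis of the
/// not-quite-Turing-complete opcode graph.
fn estimated_opcode_ns(per_element_ns: u64, elements: usize) -> u64 {
    per_element_ns * elements as u64
}

/// Choose how many chunks to fan out into, given the current VM load.
fn choose_parallelism(
    per_element_ns: u64,
    array_len: usize,
    cores: usize,
    active_tasks: usize,
) -> usize {
    // If the VM is already saturated, stay sequential: scheduling overhead
    // would dominate any parallelization gains.
    if active_tasks >= cores * 2 {
        return 1;
    }
    // Otherwise fan out only as far as each chunk's estimated compute time
    // still exceeds the cost of scheduling it.
    let mut chunks = cores.min(array_len).max(1);
    while chunks > 1
        && estimated_opcode_ns(per_element_ns, array_len / chunks) < SCHEDULING_OVERHEAD_NS
    {
        chunks /= 2;
    }
    chunks
}
```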
That is a very good point and I am glad you brought it up. We should eventually be able to beat a work-stealing mechanism by estimating the resources required across the board, and that would only be possible in Alan. The bias toward parallelism is tricky though, because I think it is a spectrum. Things are already running in parallel since https://github.com/alantech/alan/pull/384 and https://github.com/alantech/alan/pull/385 landed, so going too far down the parallelization spectrum without understanding the most common workloads could hurt performance badly as serialization costs rise.
@depombo what about a simple heuristic to choose sequential versions when the number of Tokio tasks crosses some multiple of CPU cores?
Not saying we do that now, but it could be a good first step.
That's an interesting idea worth looking into. It might be a flaky heuristic, though, because Tokio tasks can be either CPU- or IO-bound. I also don't know that Tokio exposes the number of outstanding tasks, but it should be possible somehow.
We could keep our own tracking metadata and simply skip the actual IO opcodes? The concern is it getting out of sync, with the AVM eventually doing something weird because it thinks there are too many (or too few) concurrent tasks.
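As a sketch of what that tracking could look like (all names here are hypothetical, not existing AVM internals): an atomic counter bumped on spawn and decremented on completion, compared against a multiple of the core count.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

/// Hypothetical sketch: track outstanding CPU-bound tasks ourselves, since
/// Tokio doesn't expose a queue-depth metric. IO-bound opcodes would skip
/// this counter entirely.
struct TaskTracker {
    active: AtomicUsize,
}

impl TaskTracker {
    fn new() -> Arc<Self> {
        Arc::new(TaskTracker { active: AtomicUsize::new(0) })
    }

    /// The heuristic from above: go sequential once the number of in-flight
    /// tasks crosses some multiple of the CPU core count.
    fn should_parallelize(&self, cores: usize) -> bool {
        self.active.load(Ordering::Relaxed) < cores * 2
    }

    /// Spawn a future while keeping the counter in sync.
    fn spawn_tracked<F>(self: &Arc<Self>, fut: F) -> tokio::task::JoinHandle<F::Output>
    where
        F: std::future::Future + Send + 'static,
        F::Output: Send + 'static,
    {
        let tracker = Arc::clone(self);
        tracker.active.fetch_add(1, Ordering::Relaxed);
        tokio::spawn(async move {
            let out = fut.await;
            // Decrement on completion. If `fut` panics this never runs and
            // the count drifts upward, which is exactly the out-of-sync
            // concern raised above.
            tracker.active.fetch_sub(1, Ordering::Relaxed);
            out
        })
    }
}
```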
There are many ways to fan out parallel array operations across cores. Testing how to run https://github.com/ledwards/advent-2020/blob/main/day01b.ln as fast as possible via https://github.com/alantech/alan/pull/383 siloed me into writing parallel configurations that were over-optimized for that specific workload and that don't perform well when the size and shape of the data are different. Rayon's work-stealing model performs best when parallelizing heterogeneous workloads of different sizes and shapes because it keeps serialization costs to a minimum. We moved away from that model because it kept the AVM a lot leaner for now, and that was the right decision in order to build incrementally. However, introducing a static parallel configuration, without work stealing and without properly understanding the most common workloads, makes it hard to maximize parallelization wins while minimizing serialization costs, and could really hurt the performance of programs that might actually be quite common.
For example, each of the three following code snippets will perform better or worse depending on the nature of the Alan program, and it is not clear which one to go for without benchmarking different workloads or using a work-stealing mechanism. The first chunks up the array work across greenthreads, but awaits each chunk's work before proceeding to the next chunk:
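Something along these lines (a toy Rust sketch, assuming a Tokio runtime, with a doubling op standing in for the real opcode):

```rust
// First configuration: spawn each chunk on its own greenthread, but await
// it before moving on to the next chunk, so only one chunk is ever in flight.
async fn map_chunked_stepwise(data: Vec<i64>, chunk_size: usize) -> Vec<i64> {
    let mut out = Vec::with_capacity(data.len());
    for chunk in data.chunks(chunk_size) {
        let chunk = chunk.to_vec();
        // Spawn, then immediately await: we pay the scheduling cost per
        // chunk but get no overlap between chunks.
        let handle = tokio::spawn(async move {
            chunk.into_iter().map(|x| x * 2).collect::<Vec<i64>>()
        });
        out.extend(handle.await.unwrap());
    }
    out
}
```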
The second also chunks the array computation across greenthreads, but awaits all of them simultaneously:
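Roughly (same toy op and assumptions as above, plus the `futures` crate for `join_all`):

```rust
use futures::future::join_all;

// Second configuration: spawn every chunk on its own greenthread up front,
// then await all of them at once, so chunks run concurrently across the
// Tokio worker threads.
async fn map_chunked_concurrent(data: Vec<i64>, chunk_size: usize) -> Vec<i64> {
    let handles: Vec<_> = data
        .chunks(chunk_size)
        .map(|chunk| {
            let chunk = chunk.to_vec();
            tokio::spawn(async move {
                chunk.into_iter().map(|x| x * 2).collect::<Vec<i64>>()
            })
        })
        .collect();
    // Await all chunk tasks simultaneously and flatten the results in order.
    join_all(handles)
        .await
        .into_iter()
        .flat_map(|res| res.unwrap())
        .collect()
}
```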
The third just awaits the computation for each chunk of the array without delegating to a new greenthread:
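Roughly (same assumptions again):

```rust
// Third configuration: compute each chunk inline as a plain awaited future,
// with no tokio::spawn at all, so there is zero scheduling overhead but also
// no parallelism beyond the current task.
async fn map_chunked_inline(data: Vec<i64>, chunk_size: usize) -> Vec<i64> {
    let mut out = Vec::with_capacity(data.len());
    for chunk in data.chunks(chunk_size) {
        // The async block runs to completion on the current task; awaiting
        // it here never hands the chunk off to another worker thread.
        let mapped = async { chunk.iter().map(|x| x * 2).collect::<Vec<i64>>() }.await;
        out.extend(mapped);
    }
    out
}
```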