Max-Meldrum opened 3 years ago
How does glommio compare to tokio's multi-threaded work-stealing runtime when we set the number of threads to the number of cores?
Arcon is mainly a streaming system, and the tasks/operators it runs are essentially expected to run forever. From my understanding, tokio's work-stealing runtime targets more general-purpose use cases where one spawns many short- and long-lived tasks, whereas Arcon mainly has long-running tasks that would benefit from being pinned to cores for CPU and memory locality. Glommio's application-level scheduling/priorities are also quite suitable for data processing systems, as they may have to run important tasks alongside the main task.
So, Glommio is suitable for data processing systems where you shard the data and have separate pinned cores processing it, while the work-stealing approach may be better if the data distribution is skewed and some cores would otherwise sit idle.
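To make the contrast concrete, here is a rough sketch (not Arcon code) of the two setups: a tokio multi-threaded work-stealing runtime sized to the core count, versus one pinned glommio executor per core, each owning a shard. `run_operator` is a hypothetical placeholder for a long-running streaming operator, and the `LocalExecutorBuilder::new().pin_to_cpu(...)` names are from the glommio 0.x releases, so they may differ in newer versions:

```rust
use glommio::LocalExecutorBuilder;

// Work stealing: one shared runtime, tasks may migrate between worker threads.
fn tokio_work_stealing(num_cores: usize) {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(num_cores)
        .enable_all()
        .build()
        .unwrap();
    rt.block_on(async {
        for shard_id in 0..num_cores {
            // Any idle worker may pick this task up; no core/memory locality guarantees.
            tokio::spawn(async move { run_operator(shard_id).await });
        }
    });
}

// Thread-per-core: one single-threaded executor pinned per core, one shard each.
fn glommio_thread_per_core(num_cores: usize) {
    let handles: Vec<_> = (0..num_cores)
        .map(|cpu| {
            LocalExecutorBuilder::new()
                .pin_to_cpu(cpu)
                .spawn(move || async move {
                    // This shard's state never leaves this core: locality, no cross-core sync.
                    run_operator(cpu).await;
                })
                .unwrap()
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}

// Hypothetical long-running operator; a real one would loop over incoming events forever.
async fn run_operator(shard_id: usize) {
    let _ = shard_id;
}
```

In the work-stealing setup an overloaded shard's tasks can be picked up by idle workers, while in the pinned setup each shard stays put and keeps its cache/NUMA locality.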
Related to #214
Not a priority as of now, but it would be interesting to look into a TpC (thread-per-core) model with application-level cooperative scheduling for the data path of the runtime (see the sketch below). We would of course have to evaluate it against our current approach and look at the trade-offs and what it would mean for our design in general.
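As a rough illustration of what such a TpC data path could look like (a sketch, not a design proposal), the example below hash-partitions keyed events onto one pinned worker per core, each owning its shard of state exclusively. The `Event` type is hypothetical, and the `crossbeam-channel` and `core_affinity` crates stand in for whatever channels and pinning the actual executor would provide:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::thread;

// A keyed event flowing through the data path (hypothetical type).
struct Event {
    key: u64,
    value: u64,
}

fn main() {
    let core_ids = core_affinity::get_core_ids().unwrap();
    let num_shards = core_ids.len();

    // One bounded channel per shard, so one slow shard does not block the others' queues.
    let (senders, receivers): (Vec<_>, Vec<_>) = (0..num_shards)
        .map(|_| crossbeam_channel::bounded::<Event>(1024))
        .unzip();

    // One pinned worker per core, owning its shard of state exclusively (no locks).
    let workers: Vec<_> = receivers
        .into_iter()
        .zip(core_ids)
        .map(|(rx, core)| {
            thread::spawn(move || {
                core_affinity::set_for_current(core);
                let mut state: HashMap<u64, u64> = HashMap::new();
                // Long-running operator: processes its shard's events until the channel closes.
                for event in rx {
                    *state.entry(event.key).or_insert(0) += event.value;
                }
            })
        })
        .collect();

    // Router: hash the key so the same key always lands on the same shard/core.
    for i in 0..10_000u64 {
        let event = Event { key: i % 100, value: 1 };
        let mut hasher = DefaultHasher::new();
        event.key.hash(&mut hasher);
        let shard = (hasher.finish() as usize) % num_shards;
        senders[shard].send(event).unwrap();
    }

    drop(senders); // close the channels so the workers drain and exit
    for w in workers {
        w.join().unwrap();
    }
}
```

With this layout a skewed key distribution overloads one shard while other cores sit idle, which is exactly the trade-off against work stealing discussed above.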
Interesting Rust crate to follow: https://github.com/DataDog/glommio
References:
- https://www.datadoghq.com/blog/engineering/introducing-glommio/
- https://vectorized.io/blog/tpc-buffers/
- https://helda.helsinki.fi//bitstream/handle/10138/313642/tpc_ancs19.pdf?sequence=1
- https://www.scylladb.com/product/technology/shard-per-core-architecture/
- http://vldb.org/pvldb/vol14/p3110-katsifodimos.pdf