Query execution works by building up a tree of futures to execute each partition in each query stage.
The root node of each query stage is a RayShuffleWriterExec. It executes its child plan and fetches all of the results into memory, which is already not scalable because there could be millions or billions of rows. It then concatenates all of the batches into one large batch and returns it. This large batch is stored in Ray's object store and will be fetched by the next query stage.
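The in-memory write path described above can be sketched roughly as follows. This is an illustrative simulation, not the actual DataFusion Ray code: `execute_child_plan` and `object_store` are hypothetical stand-ins, batches are modeled as plain Python lists rather than Arrow RecordBatches, and a dict stands in for Ray's object store (`ray.put`/`ray.get`).

```python
# Hypothetical sketch of the in-memory shuffle write. All names here are
# illustrative stand-ins, not actual DataFusion Ray APIs.

object_store = {}  # stand-in for Ray's object store (ray.put / ray.get)

def execute_child_plan(partition):
    # Stand-in: the real operator would stream Arrow RecordBatches
    # produced by the child plan for this partition.
    return [[(partition, i), (partition, i + 1)] for i in range(3)]

def shuffle_write(stage_id, partition):
    # 1. Fetch ALL child results into memory -- this is the scalability
    #    problem: the whole partition must fit in the worker's memory.
    batches = execute_child_plan(partition)
    # 2. Concatenate all batches into one large batch.
    large_batch = [row for batch in batches for row in batch]
    # 3. Store the large batch for the next query stage to fetch.
    object_store[(stage_id, partition)] = large_batch
    return large_batch

result = shuffle_write(stage_id=0, partition=0)
```

The key point the sketch makes concrete is that memory use is proportional to the full partition size, because every batch is materialized before the single concatenated batch is handed to the object store.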
The original disk-based shuffle mechanism (that was removed in https://github.com/apache/datafusion-ray/pull/19) did not suffer from any of these issues because query results were streamed to disk in the writer and then streamed back out in the reader. However, this approach assumes that all workers have access to the same local file system.
One approach would be to bring back the original shuffle code and then add a mechanism for reading data from another node by implementing a gRPC based service (such as Arrow Flight protocol) in each worker.
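The streaming alternative could be sketched as below. Again this is a hypothetical illustration, not Arrow Flight or DataFusion Ray code: per-partition "files" are modeled as in-memory lists, and the gRPC/Flight service is reduced to a plain generator named `do_get` (echoing Flight's DoGet call), so the only thing the sketch demonstrates is the bounded-memory streaming pattern.

```python
# Hypothetical sketch of a streaming shuffle in the spirit of the
# disk-based writer plus a Flight-like fetch service. All names are
# illustrative; lists stand in for files on local disk, and a generator
# stands in for a gRPC / Arrow Flight endpoint.

shuffle_files = {}  # stand-in for per-partition shuffle files on disk

def shuffle_write_streaming(stage_id, partition, batch_stream):
    # Stream each batch to "disk" as it arrives. Memory use is bounded
    # by one batch, unlike concatenating everything into one large batch.
    out = shuffle_files.setdefault((stage_id, partition), [])
    for batch in batch_stream:
        out.append(batch)

def do_get(stage_id, partition):
    # Flight-like DoGet: stream batches back one at a time, so a reader
    # on another node never needs the whole partition in memory and does
    # not need access to the writer's local file system.
    for batch in shuffle_files[(stage_id, partition)]:
        yield batch

shuffle_write_streaming(0, 0, iter([[1, 2], [3, 4]]))
fetched = list(do_get(0, 0))
```

Serving the shuffle files over such a service removes the shared-file-system assumption while keeping the streaming behavior of the original disk-based mechanism.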