This PR introduces the Dask module to support PyRDF analysis execution through a dask.distributed scheduler. The connection is made either to a remote scheduler or to a locally started one, depending on whether the user provides a scheduler address in the configuration of the Dask instance.
The execution of the graph is done through the dask.delayed mechanism, which wraps both the mapper and reducer functions. Data ranges are mapped and the results are recursively reduced until only one list of merged action results remains. A call to dask.distributed.Future.compute returns the final result to the user.
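To illustrate the map/tree-reduce pattern described above, here is a minimal sketch in plain Python. The names `mapper`, `reducer`, and `tree_reduce` are illustrative placeholders, not the module's actual API; in the real implementation these calls would be wrapped in `dask.delayed` so the scheduler executes them lazily and in parallel.

```python
def mapper(data_range):
    # Placeholder: process one range of entries and produce a partial result.
    # In PyRDF this would run the RDataFrame computation graph on the range.
    return sum(data_range)

def reducer(left, right):
    # Placeholder: merge two partial results into one.
    return left + right

def tree_reduce(partials, reducer):
    """Recursively reduce pairs of partial results until one remains."""
    if len(partials) == 1:
        return partials[0]
    merged = [
        reducer(partials[i], partials[i + 1]) if i + 1 < len(partials)
        else partials[i]  # odd element carried to the next round
        for i in range(0, len(partials), 2)
    ]
    return tree_reduce(merged, reducer)

ranges = [range(0, 10), range(10, 20), range(20, 30)]
partials = [mapper(r) for r in ranges]
final = tree_reduce(partials, reducer)
print(final)  # sum of 0..29, i.e. 435
```

With Dask, each `mapper(r)` and `reducer(a, b)` call would instead be a `dask.delayed` node, and triggering the computation on the root of the reduction tree would return the single merged result.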
A new entry has been added to the options of PyRDF.use accordingly.
TODO:
[ ] add support to distribute files to the workers
Thank you! Yeah, it's a very basic first implementation; I wanted to push it now so that others can try it too. I'll make sure to add more tests and docs before merging.