VibhuJawa opened this issue 3 years ago
@VibhuJawa apologies for not responding earlier. When I tried this, the workflow took 45+ minutes, at which point I killed it. Is this your experience?
Nope, it takes 1min 34.9s for me. Just tried it again. It also reproduces at 1/10th the scale, where it takes 7s locally for me.
import numpy as np
import cupy as cp
import dask.array as da
from dask.distributed import progress, wait

# Build a ~2 GB float32 array, chunk it, and move each block onto the GPUs
ar = np.ones(shape=(1_000_000, 512), dtype=np.float32)
dask_ar = da.from_array(ar, chunks=40_000).map_blocks(cp.asarray)
dask_ar = dask_ar.persist()
progress(dask_ar)

client.has_what()                # block placement before rebalancing
wait(client.rebalance(dask_ar))
client.has_what()                # block placement after rebalancing
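A quick way to see whether the rebalance actually moved anything is to collapse the has_what() output into per-worker block counts (a small sketch, not part of the snippet above):

counts = {worker: len(keys) for worker, keys in client.has_what().items()}
print(counts)  # an effective rebalance should leave these roughly equal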
@VibhuJawa sorry again for the delay. The current rebalancing scheme only takes into account memory consumption on the host: https://github.com/dask/distributed/blob/ac55f25d230c7144ed618d1d4374f254303a4e0a/distributed/scheduler.py#L6104-L6113
It will take some time to think through how to rebalance while taking into account current GPU memory consumption and any device memory limit settings. Device memory limits are set in dask-cuda, while all the rebalancing occurs in the scheduler.
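In the meantime, the device-side imbalance can at least be observed directly from the workers. A minimal diagnostic sketch, assuming a dask-cuda cluster where each worker owns a single GPU (this is not part of the rebalance machinery):

def gpu_mem_used():
    import cupy as cp  # imported on the worker
    free, total = cp.cuda.Device().mem_info  # bytes
    return total - free

client.run(gpu_mem_used)  # per-worker device memory use, which rebalance() does not consider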
This issue has been labeled inactive-90d due to no recent activity in the past 90 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed.
This issue has been labeled inactive-30d due to no recent activity in the past 30 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed. This issue will be labeled inactive-90d if there is no activity in the next 60 days.
Problem: Rebalance with dask-cuda does not rebalance effectively
Problem Context
We cannot seem to rebalance effectively with dask-cuda, which leads to imbalanced GPU usage. This becomes a problem across a bunch of workflows, especially those that involve a mix of ETL and machine learning.
Many machine learning algorithms (XGBoost, cuml.dask.knn, cuGraph, etc.) work today by running a local portion of the algorithm on the data held by each GPU and then doing an all-reduce-like operation. If the data is imbalanced, the GPU holding the extra data becomes memory limited, which leads to memory failures. If the data were balanced equally, we would not have these issues.
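For illustration only (this is not code from any of those libraries), the pattern looks roughly like this with the dask_ar array from the snippet above: each GPU reduces its own blocks locally, then the partial results are combined, so the GPU holding the most data sets the peak memory requirement.

# dask computes a partial sum of each block on the GPU that holds it,
# then combines the partials (the all-reduce-like step); the worker with
# the most blocks does the most local work and needs the most device memory
column_sums = dask_ar.sum(axis=0).compute()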
Minimal Example:
Start Cluster
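(A sketch of a typical setup; the exact cluster configuration used is in the notebook linked below.)

from dask_cuda import LocalCUDACluster
from dask.distributed import Client

cluster = LocalCUDACluster()  # one worker per visible GPU
client = Client(cluster)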
Create Imbalanced Data on workers
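(One hypothetical way to end up with imbalanced data, reusing ar, da, and cp from the snippet earlier in the thread; the notebook linked below shows the actual steps.)

# restrict the persisted blocks to a subset of workers so the data starts out imbalanced
workers = list(client.has_what())[:2]  # hypothetical: only the first two workers
dask_ar = da.from_array(ar, chunks=40_000).map_blocks(cp.asarray)
dask_ar = dask_ar.persist(workers=workers)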
Try and Fail with Rebalancing
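(As in the snippet earlier in the thread; an effective rebalance would spread the blocks evenly across all workers.)

before = {w: len(keys) for w, keys in client.has_what().items()}
client.rebalance(dask_ar)
after = {w: len(keys) for w, keys in client.has_what().items()}
print(before)
print(after)  # reportedly the distribution stays skewed instead of evening out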
GPU Usage for context:
Notebook Example:
https://gist.github.com/VibhuJawa/eb2d25c0c6fddeebf0e104b82eb8ef3e
CC: @randerzander, @quasiben, @ayushdg.