VibhuJawa opened this issue 2 years ago
I don't know if this is something we are ready to do yet for the general use case; JIT-Unspill for CuPy is still very slow.
cc @madsbk
I thought we switched off spilling for CuPy in this PR: https://github.com/rapidsai/dask-cuda/pull/568
JIT-Unspill should support CuPy arrays by always un-spilling them before task execution: https://github.com/rapidsai/dask-cuda/pull/856.
But the last time I tested it, it was slow, in fact slower than default spilling: https://github.com/rapidsai/dask-cuda/issues/840#issuecomment-1026013073. I should probably revisit that.
We should switch JIT-Unspill on by default when RMM is being used. This configuration is often overlooked by users when setting up their clusters, which causes them to use memory ineffectively. We should default this to True when users run dask-cuda clusters with an RMM pool.
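For illustration, here is a minimal sketch of the configuration this issue proposes to make the default. It assumes the current `LocalCUDACluster` keyword arguments `rmm_pool_size` and `jit_unspill`; the pool size is just an example value:

```python
from dask_cuda import LocalCUDACluster
from dask.distributed import Client

# Today, a cluster with an RMM pool falls back to default device-memory
# spilling unless jit_unspill=True is passed explicitly. The proposal is
# to pair the two automatically so users don't have to remember this flag.
cluster = LocalCUDACluster(
    rmm_pool_size="24GB",  # illustrative pool size
    jit_unspill=True,      # what this issue proposes enabling by default
)
client = Client(cluster)
```

If I recall correctly, the equivalent `dask-cuda-worker` CLI flags are `--rmm-pool-size` and `--enable-jit-unspill`.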
Additional Context:
A couple of cuGraph + Dask users recently ran into this issue, and enabling JIT-Unspill helped relieve that memory pressure.
CC: @rlratzel, @jnke2016, @quasiben, @randerzander