rapidsai / cudf

cuDF - GPU DataFrame Library
https://docs.rapids.ai/api/cudf/stable/
Apache License 2.0

[FEA][JNI] expose an RMM allocator API in cuDF JNI #9209

Open abellina opened 2 years ago

abellina commented 2 years ago

On the Java side we currently support several combinations of allocators and wrappers that make it easy to set up pools (default, arena, and now async) on top of backing allocators (cuda, managed memory). But it is getting to the point where the options for these allocators no longer all fit the min/max pool size pattern.
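
For context, a minimal sketch of the current setup pattern, assuming the `Rmm.initialize` overload that takes an allocation mode, a log configuration, and initial/maximum pool sizes (exact overloads differ across cuDF versions):

```java
import ai.rapids.cudf.Rmm;
import ai.rapids.cudf.RmmAllocationMode;

public class PoolSetupSketch {
  public static void main(String[] args) {
    // Combine a pool-style allocator with a backing allocator by OR-ing mode
    // flags, then size the pool. Every allocator choice has to be expressed
    // through this same initial/maximum pool size pattern.
    long initPoolSize = 512L * 1024 * 1024;      // 512 MiB
    long maxPoolSize = 2L * 1024 * 1024 * 1024;  // 2 GiB
    Rmm.initialize(
        RmmAllocationMode.POOL | RmmAllocationMode.CUDA_MANAGED_MEMORY,
        null,           // no RMM allocation logging
        initPoolSize,
        maxPoolSize);
    try {
      // ... allocate device memory and run cuDF operations ...
    } finally {
      Rmm.shutdown();
    }
  }
}
```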

For general users of the cuDF JNI bindings, the combinations above may not be what they want. We propose exposing an RMM allocator API that would allow better composability of the various allocators we use and test with (this is also a good chance to clean up some tech debt around allocators we no longer use or intend to support).
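
A minimal sketch of what a composable resource API could look like on the Java side; every class and method name below is hypothetical and only illustrates the proposal, not an actual cuDF API:

```java
/** Hypothetical resource abstraction mirroring RMM's device_memory_resource. */
interface DeviceMemoryResource extends AutoCloseable {
  long allocate(long size, long stream);
  void deallocate(long address, long size, long stream);
  @Override default void close() {}
}

/** Backing allocator, e.g. wrapping cudaMallocAsync (native calls omitted). */
final class AsyncMemoryResource implements DeviceMemoryResource {
  @Override public long allocate(long size, long stream) {
    throw new UnsupportedOperationException("native allocation omitted in this sketch");
  }
  @Override public void deallocate(long address, long size, long stream) {
    throw new UnsupportedOperationException("native free omitted in this sketch");
  }
}

/** Pool wrapper that suballocates from whatever upstream resource it is given. */
final class PoolMemoryResource implements DeviceMemoryResource {
  private final DeviceMemoryResource upstream;
  private final long initialSize;
  private final long maximumSize;

  PoolMemoryResource(DeviceMemoryResource upstream, long initialSize, long maximumSize) {
    this.upstream = upstream;
    this.initialSize = initialSize;
    this.maximumSize = maximumSize;
  }

  @Override public long allocate(long size, long stream) {
    // A real pool would carve suballocations out of larger upstream blocks;
    // this sketch simply forwards to the upstream resource.
    return upstream.allocate(size, stream);
  }

  @Override public void deallocate(long address, long size, long stream) {
    upstream.deallocate(address, size, stream);
  }
}
```

With a hierarchy like this, the combinations above become explicit compositions, e.g. `new PoolMemoryResource(new AsyncMemoryResource(), initSize, maxSize)`, rather than new flag values and `initialize` overloads.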

abellina commented 2 years ago

One issue that was raised is that a DeviceMemoryBuffer could instead be composed with a Java "MemoryResource", rather than requiring additional classes like CudaMemoryBuffer to accomplish this. We could then pass other resources, such as a potential "AsyncMemoryResource" instance, or a hierarchy of resources (the logging resource + pool, for example). Ideally this issue would tackle that problem as well.
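
Continuing the hypothetical sketch from the issue description, the call-site change could look like this; the `DeviceMemoryBuffer.allocate` overload that accepts a resource does not exist today and is shown only to illustrate the idea:

```java
// Hypothetical: build the resource hierarchy once (pool over async allocator).
DeviceMemoryResource resource =
    new PoolMemoryResource(new AsyncMemoryResource(),
                           512L * 1024 * 1024,        // initial pool size
                           2L * 1024 * 1024 * 1024);  // maximum pool size

// Instead of a dedicated buffer class per backing allocator (CudaMemoryBuffer,
// etc.), the buffer would be parameterized by whatever resource was composed.
try (DeviceMemoryBuffer buf = DeviceMemoryBuffer.allocate(1024, resource)) {
  // ... use the buffer ...
}
```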

github-actions[bot] commented 2 years ago

This issue has been labeled inactive-30d due to no recent activity in the past 30 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed. This issue will be labeled inactive-90d if there is no activity in the next 60 days.

github-actions[bot] commented 2 years ago

This issue has been labeled inactive-90d due to no recent activity in the past 90 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed.