rapidsai / cudf

cuDF - GPU DataFrame Library
https://docs.rapids.ai/api/cudf/stable/
Apache License 2.0

[ENH] Support more input data layouts in `cudf.from_dlpack` #10849

Open wence- opened 2 years ago

wence- commented 2 years ago

Related to #10754: the current implementation of from_dlpack requires unit-stride, Fortran-order input, and produces appropriate error messages in the unsupported cases.

Consider

import cudf
import cupy
a = cupy.arange(10)
b = a[::2]
c = cudf.from_dlpack(b.__dlpack__())
=> RuntimeError: from_dlpack of 1D DLTensor only for unit-stride data
b = cupy.broadcast_to(a[1], (10,))  # b is stride-0
c = cudf.from_dlpack(b.__dlpack__())
=> RuntimeError: from_dlpack of 1D DLTensor only for unit-stride data

a = cupy.arange(12).reshape(3, 4).copy(order="F")
b = a[::2, :]
c = cudf.from_dlpack(b.__dlpack__())
=> RuntimeError: from_dlpack of 2D DLTensor only for column-major unit-stride data

Since from_dlpack copies in all cases right now, I think the various layouts can be handled like so (the sketch after this list illustrates the stride patterns cases 2-4 refer to):

  1. Non-Fortran-order: useful error
  2. Unit-stride: keep the current cudaMemcpyAsync one column at a time
  3. Fastest dimension is stride-0 (broadcasted arrays): std::fill for the 1D case; for the 2D case, just getting the strides right
  4. Fastest dimension is stride-N (sliced arrays): cudaMemcpy2DAsync with appropriate choices of pitch and stride for the source array
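
For concreteness, here is a small sketch (using CuPy, mirroring the examples above) that prints the shape and element strides for the layouts cases 2-4 refer to; DLPack reports strides in elements, whereas CuPy's .strides are in bytes, hence the division by itemsize:

import cupy

def element_strides(x):
    # DLPack strides are in elements; CuPy reports strides in bytes
    return tuple(s // x.dtype.itemsize for s in x.strides)

a = cupy.arange(12).reshape(3, 4).copy(order="F")
print(a.shape, element_strides(a))               # (3, 4) (1, 3) -> case 2: unit stride
b0 = cupy.broadcast_to(a[:1, :], (3, 4))
print(b0.shape, element_strides(b0))             # (3, 4) (0, 3) -> case 3: stride-0
b1 = a[::2, :]
print(b1.shape, element_strides(b1))             # (2, 4) (2, 3) -> case 4: stride-N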

However, I'm not really sure of the performance implications of these choices, or whether the current approach of producing an error and requiring that the caller copy to contiguous Fortran order before calling from_dlpack is in fact better. For example, for case 4, is it faster to copy to a contiguous buffer first rather than copying column by column?
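
To get a feel for that question, a rough micro-benchmark along these lines (hypothetical sizes, using cupyx.profiler.benchmark) could compare a single contiguous copy of the sliced array against copying it one column at a time; note this measures CuPy's copy kernels rather than libcudf's cudaMemcpy paths, so it is only indicative:

import cupy
from cupyx.profiler import benchmark

a = cupy.random.random((10_000, 64)).copy(order="F")
b = a[::2, :]                          # strided Fortran-order slice (case 4)

def whole_copy():
    # one contiguous Fortran-order copy of the whole slice
    return cupy.asfortranarray(b)

def column_by_column():
    # one strided device-to-device copy per column
    out = cupy.empty(b.shape, order="F")
    for j in range(b.shape[1]):
        out[:, j] = b[:, j]
    return out

print(benchmark(whole_copy, n_repeat=100))
print(benchmark(column_by_column, n_repeat=100))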

wence- commented 2 years ago

The silently bad behaviour was turned into actual logic_errors in #10850; it would still be possible to support more Fortran-order (e.g. sliced) or broadcasted inputs if desired.
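
In the meantime, the caller-side workaround looks like this (a minimal sketch: make an explicit contiguous Fortran-order copy with cupy.asfortranarray before handing the tensor to cuDF):

import cudf
import cupy

a = cupy.arange(12).reshape(3, 4).copy(order="F")
b = a[::2, :]                                    # rejected by from_dlpack as-is
df = cudf.from_dlpack(cupy.asfortranarray(b).__dlpack__())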

github-actions[bot] commented 2 years ago

This issue has been labeled inactive-30d due to no recent activity in the past 30 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed. This issue will be labeled inactive-90d if there is no activity in the next 60 days.

github-actions[bot] commented 2 years ago

This issue has been labeled inactive-90d due to no recent activity in the past 90 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed.