Please note that we currently have no general support to load from files into GPU memory directly.
That would require direct access to external memory from the GPU. I think it's best for a third-party library to carry out this kind of task. WDYT?
Yes @trivialfis, I agree that loading from files directly into GPU memory is a task for some other library.
What we could do in this integration, though: currently, when we're loading from multiple CSV/Parquet files, we construct pandas dataframes in CPU memory and concatenate them. When creating device quantile DMatrices, we then convert the (usually full) dataframe to cupy arrays in the iterator and pass them to the device matrix.
Instead, we could skip the pandas dataframes and create cupy arrays to begin with. We would then not hold a copy of the full data in CPU memory. During construction, data would still flow through CPU memory, but only until the cupy array is built.
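Roughly, the per-file loading could look like this (a minimal sketch, not the actual xgboost_ray loader; the helper name, arguments, and the use of Parquet/pandas here are just for illustration):

```python
import cupy as cp
import pandas as pd


def load_files_as_cupy(paths, label_col):
    """Hypothetical helper: move each file to the GPU as soon as it is read."""
    data_parts, label_parts = [], []
    for path in paths:
        df = pd.read_parquet(path)      # per-file data still passes through CPU memory...
        labels = df.pop(label_col)
        data_parts.append(cp.asarray(df.to_numpy()))       # ...but moves to the GPU right away
        label_parts.append(cp.asarray(labels.to_numpy()))
        del df, labels                  # drop the CPU copy before reading the next file
    # Concatenation happens on the GPU, so the full dataset never lives in CPU memory at once.
    return cp.concatenate(data_parts), cp.concatenate(label_parts)
```

The resulting cupy arrays could then be fed to the device quantile DMatrix iterator as before.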
Does that make sense?
Thanks for the explanation, yeah that makes sense.
RayDeviceQuantileDMatrix became incompatible with recent changes. This PR makes sure it works again.
Please note that we currently have no general support to load from files into GPU memory directly, avoiding CPU memory. Thus the RayDeviceQuantileDMatrix works with conversion (or when cupy arrays are provided directly). In the future we might want to introduce interfaces for loading from files into GPU memory.
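For reference, usage then looks roughly like this (a sketch assuming the usual xgboost_ray entry points; file names, the label column, and parameters are placeholders):

```python
from xgboost_ray import RayDeviceQuantileDMatrix, RayParams, train

# Data is loaded per shard, converted to cupy arrays, and passed to the
# device quantile DMatrix on each GPU actor.
dtrain = RayDeviceQuantileDMatrix(
    ["part_0.parquet", "part_1.parquet"],  # placeholder file names
    label="target",                        # placeholder label column
)

bst = train(
    {"objective": "binary:logistic", "tree_method": "gpu_hist"},
    dtrain,
    ray_params=RayParams(num_actors=2, gpus_per_actor=1),
)
```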
Closes #65