rapidsai / cudf

cuDF - GPU DataFrame Library
https://docs.rapids.ai/api/cudf/stable/
Apache License 2.0

[FEA] Improve scaling of data generation in NDS-H-cpp benchmarks #16987

Open GregoryKimball opened 1 week ago

GregoryKimball commented 1 week ago

Is your feature request related to a problem? Please describe.
In the NDS-H-cpp benchmarks, the memory footprint of data generation is larger than the memory footprint of query execution. This ends up limiting us to <=SF10 on H100 GPUs, perhaps as much as 10x smaller than the scale factor we can reach with pre-generated files.

Describe the solution you'd like
There are a few solutions we could use.

Additional context
On A100, we can run query sizes up to SF100 or so, but the generator only goes to ~SF10.

karthikeyann commented 1 week ago

We could allocate managed memory for data generation, use it, and destroy it after writing the Parquet data to host, then use that result for the queries. But remember, the host-to-device transfer is included as part of the scan (Parquet read) in the benchmark time as well. We could also update the API to accept `cuio_source_sink_pair`.
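A minimal sketch of that flow, assuming libcudf and RMM headers are available. `generate_table()` is a placeholder for the NDS-H data generator, not a real API; the managed resource lets generation oversubscribe device memory, and the result is persisted to a host buffer before the managed allocations are released:

```cpp
#include <cudf/io/parquet.hpp>
#include <rmm/mr/device/managed_memory_resource.hpp>
#include <rmm/mr/device/per_device_resource.hpp>

#include <memory>
#include <vector>

// Hypothetical: generate one NDS-H table with whatever resource is current.
std::unique_ptr<cudf::table> generate_table();

std::vector<char> generate_to_host_parquet()
{
  // Route data-generation allocations through managed (unified) memory.
  rmm::mr::managed_memory_resource managed_mr;
  auto* previous_mr = rmm::mr::set_current_device_resource(&managed_mr);

  auto table = generate_table();

  // Write Parquet into a host-side buffer rather than a file.
  std::vector<char> host_buffer;
  auto opts = cudf::io::parquet_writer_options::builder(
                cudf::io::sink_info(&host_buffer), table->view())
                .build();
  cudf::io::write_parquet(opts);

  // Free the managed allocations before the timed queries run,
  // then restore the prior memory resource.
  table.reset();
  rmm::mr::set_current_device_resource(previous_mr);
  return host_buffer;
}
```

The benchmark's scan would then read from `host_buffer`, so the host-to-device copy stays inside the timed Parquet read as noted above.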

GregoryKimball commented 1 week ago

Thank you @karthikeyann for your comments.

In the end I would like to be able to run SF100 with CUDA async MR on A100. If the data gen uses managed MR and the timed queries use async MR, that would work great.
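That split could look like the following sketch, assuming RMM is available; `generate_to_host()` and `run_queries()` are hypothetical stand-ins for the benchmark's data-generation and query phases:

```cpp
#include <rmm/mr/device/cuda_async_memory_resource.hpp>
#include <rmm/mr/device/managed_memory_resource.hpp>
#include <rmm/mr/device/per_device_resource.hpp>

#include <vector>

// Hypothetical phase entry points for the NDS-H-cpp benchmark.
std::vector<char> generate_to_host();
void run_queries(std::vector<char> const& host_parquet);

int main()
{
  rmm::mr::managed_memory_resource managed_mr;
  rmm::mr::cuda_async_memory_resource async_mr;

  // Untimed data generation: managed memory can exceed device capacity,
  // so SF100 generation becomes feasible on an A100.
  rmm::mr::set_current_device_resource(&managed_mr);
  auto host_data = generate_to_host();

  // Timed queries: switch to the CUDA async pool resource so benchmark
  // numbers reflect the intended allocator.
  rmm::mr::set_current_device_resource(&async_mr);
  run_queries(host_data);
  return 0;
}
```

The key design point is that the memory resource is swapped between the untimed and timed phases, so the generator's oversubscription never affects the measured query allocator.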