quiver-team / torch-quiver

PyTorch Library for Low-Latency, High-Throughput Graph Learning on GPUs.
https://torch-quiver.readthedocs.io/en/latest/
Apache License 2.0

Question: when the batch_size or the number of sampled nodes is small, Quiver's sampling shows no obvious speedup. How can it be accelerated? #111

Closed SoupFree closed 2 years ago

ZenoTan commented 2 years ago

If the sampling task is too small to utilize the GPU's massive parallelism and you cannot make it larger, you can simply fall back to CPU sampling, which may be faster in that regime.
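
A minimal sketch of that fallback, assuming you already have an `edge_index` tensor and a `train_idx` seed set; it uses PyG's `NeighborSampler` (a CPU-based sampler, not part of Quiver itself) for small batches, which is the kind of CPU sampling suggested above:

```python
# Hypothetical CPU-sampling fallback for small batches, using PyG's
# NeighborSampler instead of Quiver's GPU sampler.
# Assumes `edge_index` ([2, num_edges]) and `train_idx` are already defined.
import torch
from torch_geometric.loader import NeighborSampler  # CPU neighbor sampler from PyG

train_loader = NeighborSampler(
    edge_index,           # graph connectivity
    node_idx=train_idx,   # seed nodes to sample from
    sizes=[15, 10],       # fanout per layer
    batch_size=64,        # small batches: CPU sampling may be faster here
    shuffle=True,
    num_workers=4,        # overlap sampling with GPU training
)

for batch_size, n_id, adjs in train_loader:
    # n_id: global node ids of the sampled subgraph
    # adjs: bipartite adjacency for each layer; feed these to the model on GPU
    ...
```

For larger batch sizes or fanouts, switching back to Quiver's GPU sampler should recover the GPU-side speedup; the crossover point depends on the graph and hardware, so it is worth benchmarking both paths.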