Open beckernick opened 1 month ago
Seems like this would be an expansion of https://github.com/networkx/networkx/pull/6876
Thanks for raising this issue! I think this behavior ought to be controllable via a different environment variable and/or config option... and I do think this is important.
Today, you can add backend="cugraph" to e.g. nx.from_pandas_edgelist, but this is no longer "zero code change", so we'd like to make this story better.
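For instance, the explicit opt-in available today might look like the following (a minimal sketch; the DataFrame and column names are made up, and the backend="cugraph" call requires nx-cugraph to be installed, so it is shown commented out):

```python
import pandas as pd
import networkx as nx

# A small edge list in host memory (column names are illustrative).
df = pd.DataFrame({"src": [0, 1, 2], "dst": [1, 2, 0]})

# Plain CPU construction -- this is the "zero code change" path:
G = nx.from_pandas_edgelist(df, source="src", target="dst")

# With nx-cugraph installed, the same call can be routed to the GPU backend,
# but the extra keyword means it is no longer zero code change:
# G = nx.from_pandas_edgelist(df, source="src", target="dst", backend="cugraph")
```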
I have begun to explore a solution to this in https://github.com/networkx/networkx/pull/7502
Is this a new feature, an improvement, or a change to existing functionality?
New Feature
How would you describe the priority of this feature request
High
Please provide a clear description of problem this feature solves
When we use nx-cugraph, we currently need to create the NetworkX graph on the CPU regardless of whether every algorithm we intend to use is supported by the cuGraph backend. As a result, we pay a non-trivial performance penalty converting between CPU and GPU graphs.
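A minimal sketch of this workflow (the dispatch step is described in comments, since actually exercising it needs nx-cugraph and a GPU; data and column names are illustrative):

```python
import pandas as pd
import networkx as nx

# The edge list starts out in host (CPU) memory.
df = pd.DataFrame({"src": [0, 1, 1, 2], "dst": [1, 2, 3, 3]})

# Step 1: the graph is always materialized on the CPU first.
G = nx.from_pandas_edgelist(df, source="src", target="dst")

# Step 2: when an algorithm is dispatched to nx-cugraph (e.g. by running
# with backend dispatch enabled in the environment), the CPU graph must
# first be converted to a GPU graph -- that conversion is the penalty
# described above.
pr = nx.pagerank(G)
```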
The new caching mechanism, configurable via CACHE_CONVERTED_GRAPH=True, was designed to address this problem, making it possible to pay this cost only once per graph if you're going to run multiple algorithms. But it would be great to avoid this cost in the first place by dispatching on the graph construction operators in addition to the algorithms. In the example below, we spend significant time in from_pandas_edgelist and _convert_graph (the latter of which is only a one-time cost if we use caching). If I've already committed to using the cuGraph backend as the top-priority backend, I'd ideally just create the graph on the GPU and only pay the CPU/GPU conversion cost if I need to fall back to the CPU.
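The amortization story today, versus the ideal, can be sketched as follows (everything below runs on the CPU; the environment-variable name is taken from the discussion above and may be spelled differently in your NetworkX version):

```python
import pandas as pd
import networkx as nx

# Typically enabled in the shell before launching Python, e.g.:
#   CACHE_CONVERTED_GRAPH=True python script.py
# so that the CPU->GPU conversion is paid at most once per graph.

df = pd.DataFrame({"src": [0, 1, 2, 3], "dst": [1, 2, 3, 0]})

# Today: the graph is built on the CPU...
G = nx.from_pandas_edgelist(df, source="src", target="dst")

# ...and the first dispatched algorithm converts it; with caching enabled,
# later calls on the same graph reuse the converted GPU graph.
pr_first = nx.pagerank(G)
pr_second = nx.pagerank(G)  # no second conversion when caching is enabled

# Ideal: from_pandas_edgelist itself dispatches to the prioritized backend,
# so the graph is born on the GPU and no conversion is ever needed.
```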
But cuGraph supports from_pandas_edgelist, and it's much faster (100 ms vs. 8 s in this case).

Describe your ideal solution
The following code should dispatch to the cuGraph backend for from_pandas_edgelist in addition to pagerank.

Describe any alternatives you have considered
No response
Additional context
No response