MridulS opened this issue 1 year ago
The current benchmarking scripts are not very portable; they should be run in some automated fashion.
@MridulS I'd like to take up this issue, but I don't have any experience in building benchmarking infrastructure, so please guide me on this.
We could check whether the average of all the speedup values (in a heatmap, for graphs of different sizes and densities) is greater than 1, to make sure the parallel algorithms are more time-efficient (see the rough sketch after the test output below). Or we could use pytest-benchmark, like below.
test_benchmarks.py

import pytest
import networkx as nx
import nx_parallel

num, p = 300, 0.5
G = nx.fast_gnp_random_graph(num, p, directed=False)
H = nx_parallel.ParallelGraph(G)

@pytest.mark.benchmark
def test_algorithm_performance_G(benchmark):
    # replace betweenness_centrality with the newly added algorithm
    result_seq = benchmark(nx.betweenness_centrality, G)

@pytest.mark.benchmark
def test_algorithm_performance_H(benchmark):
    result_para = benchmark(nx.betweenness_centrality, H)
Test output (from running pytest test_benchmarks.py): [output omitted]
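For the first approach mentioned above (averaging speedups across graph sizes and densities), a rough sketch could look like the following. This is only illustrative, not the project's actual benchmarking script; the size/density grid, the choice of betweenness_centrality, and the timing via time.perf_counter are assumptions made for the example.

import time
import numpy as np
import networkx as nx
import nx_parallel

# Illustrative sketch: time the sequential and parallel runs over a grid of
# graph sizes and densities, then check that the mean speedup is > 1.
sizes = [100, 200, 300]        # assumed example sizes
densities = [0.3, 0.5, 0.7]    # assumed example densities
speedups = np.zeros((len(sizes), len(densities)))

for i, n in enumerate(sizes):
    for j, p in enumerate(densities):
        G = nx.fast_gnp_random_graph(n, p, seed=42)
        H = nx_parallel.ParallelGraph(G)

        t0 = time.perf_counter()
        nx.betweenness_centrality(G)
        t_seq = time.perf_counter() - t0

        t0 = time.perf_counter()
        nx.betweenness_centrality(H)
        t_par = time.perf_counter() - t0

        # speedup > 1 means the parallel backend was faster for this graph
        speedups[i, j] = t_seq / t_par

assert speedups.mean() > 1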
What are your thoughts on this? What should I keep in mind before structuring it?
Thank you :)
@Schefflera-Arboricola Yes, pytest-benchmark could be one way of doing this. We use ASV for networkx benchmarking, but it's possible we will need to come up with a way that incorporates networkx dispatching into the benchmarks. Ideally this benchmark suite would be able to swap in any backend (graphblas, nx-parallel, cugraph) and run the benchmarks against all of them. We still need to think a bit more about how to approach this :)
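To sketch what a dispatching-aware suite might look like: assuming ASV and the backend= keyword exposed by networkx's dispatch machinery, a benchmark class could be parametrized over backend names. The class name, the backend name "parallel", and the 300-node test graph below are assumptions made for illustration, not the actual nx-parallel suite.

import networkx as nx

# Minimal ASV-style sketch: the same timing code is parametrized over
# backends, so supporting another backend only means adding its name.
class BetweennessCentralityBenchmark:
    # None -> plain networkx; other names are assumed backend identifiers
    params = [None, "parallel"]
    param_names = ["backend"]

    def setup(self, backend):
        self.G = nx.fast_gnp_random_graph(300, 0.5, seed=42)

    def time_betweenness_centrality(self, backend):
        # the backend= keyword routes the call through networkx dispatching
        nx.betweenness_centrality(self.G, backend=backend)

ASV would then report each backend as a separate parameter value, which gets close to the "swap in any backend" idea, though it still doesn't by itself solve comparing results across repositories.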
We need to have a quick way of using either GitHub Actions or scripts to run some crude benchmarks while developing new algorithms.
By "GitHub Actions", did you mean something like this: https://github.com/networkx/networkx/pull/6834? Or something else?
Yes! I'll try to finish the one in NX main repo soon. I think it's already good to go.
Just adding this for reference here: https://conbench.github.io/conbench/
pytest-benchmark --> its results cannot be hosted the way ASV results can; ASV benchmarks --> a nice tool for comparing a library against its own past versions, but not the best option when we need to compare two libraries (i.e. networkx and nx-parallel here).