This would be helpful in general, because we want easily re-runnable code that generates a summary benchmark.
We would like to do the following: generate a bunch of graphs and datasets from those graphs, varying i) the number of nodes and ii) the edge probability, and then test our implementation for (see the sketch after this list):
- speed
- correctness
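
A minimal sketch of what such a benchmark loop could look like, using networkx for graph generation and a simple linear-Gaussian simulator for the datasets. `run_algorithm` is a placeholder for our implementation (assumed to take a data matrix and return a 0/1 adjacency matrix), and the naive SHD here is only a stand-in correctness metric:

```python
import time

import networkx as nx
import numpy as np


def random_dag(n_nodes, edge_prob, rng):
    """Sample an Erdos-Renyi graph and orient edges by node order to obtain a DAG."""
    g = nx.gnp_random_graph(n_nodes, edge_prob, seed=int(rng.integers(2**31)), directed=True)
    dag = nx.DiGraph()
    dag.add_nodes_from(g.nodes)
    dag.add_edges_from((u, v) for u, v in g.edges if u < v)
    return dag


def simulate_linear_gaussian(dag, n_samples, rng):
    """Simulate data from a linear-Gaussian SEM on the DAG (placeholder data model)."""
    data = np.zeros((n_samples, dag.number_of_nodes()))
    for node in nx.topological_sort(dag):
        parents = list(dag.predecessors(node))
        data[:, node] = rng.normal(size=n_samples)
        if parents:
            weights = rng.uniform(0.5, 2.0, size=len(parents))
            data[:, node] += data[:, parents] @ weights
    return data


def benchmark(run_algorithm, node_grid, prob_grid, n_samples=1000, seed=0):
    """Time run_algorithm(data) over a grid of (n_nodes, edge_prob) settings."""
    rng = np.random.default_rng(seed)
    results = []
    for n_nodes in node_grid:
        for edge_prob in prob_grid:
            dag = random_dag(n_nodes, edge_prob, rng)
            data = simulate_linear_gaussian(dag, n_samples, rng)
            true_adj = nx.to_numpy_array(dag)

            start = time.perf_counter()
            est_adj = run_algorithm(data)  # placeholder: our implementation goes here
            runtime = time.perf_counter() - start

            # Naive structural Hamming distance as a stand-in correctness metric.
            results.append({
                "n_nodes": n_nodes,
                "edge_prob": edge_prob,
                "runtime_s": runtime,
                "shd": int(np.sum(true_adj != est_adj)),
            })
    return results
```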
We can vary the graph model between Erdos-Renyi, Weighted-Edge-Degree, and any of the common networkx models.
Ideally, this suite of benchmark scripts makes it easy to plug in an alternative algorithm, e.g. causal-learn, so we can run the same benchmarks side by side.
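
For example, a causal-learn adapter for the sketch above could look roughly like this. The import path and the conversion from causal-learn's endpoint matrix to a plain adjacency matrix are assumptions, and since PC returns a CPDAG the SHD comparison above would need to be made fairer in practice:

```python
from causallearn.search.ConstraintBased.PC import pc


def causal_learn_pc(data):
    cg = pc(data)
    g = cg.G.graph
    # Assumed encoding: g[i, j] == -1 and g[j, i] == 1 means i -> j.
    return ((g == -1) & (g.T == 1)).astype(int)


results = benchmark(causal_learn_pc, node_grid=[10, 25, 50], prob_grid=[0.1, 0.3])
```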
cc: @jaron-lee