sirandreww opened 2 days ago
What kind of "results" are you thinking of? As far as I know, this set of benchmarks has not yet been used as a survey of BDD packages. If it was done, more BDD packages should be added (see also #12). I hadn't thought to maintain a document with such information on this repository.
The best I can do is create a list of papers that have used this set of benchmarks? But those papers each focus on a specific BDD package. That is, they often give a relative rather than an absolute view.
Adding a list of papers that contain benchmark results would be excellent.
I think it would be great if the README could include a graph comparing the different implementations by speed and memory usage. That way, the performance differences between these libraries are known, which would drive their improvement further.
With respect to: List of Publications
I have pushed a list of publications in dc57f16.
@nhusung, I assume Configuring BDD Compilation Techniques for Feature Models didn't use this benchmarking suite, but rather that the newly added CNF benchmark is a generalisation for future work? Hence, it shouldn't be added to the list.
Yes, for that paper, we directly used oxidd-cli. The recently added CNF benchmark is based on our findings in Configuring BDD Compilation Techniques for Feature Models, and corresponds to the “balanced” construction.
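For readers unfamiliar with the “balanced” construction: the idea is to build a small BDD per clause and then conjoin them pairwise in a balanced binary tree, rather than folding left-to-right, which tends to keep intermediate diagrams smaller. The sketch below is purely illustrative and not taken from this repository or from oxidd-cli; `balanced_apply` is a hypothetical helper, and strings stand in for clause BDDs so the example runs without a BDD library.

```python
def balanced_apply(items, op):
    """Combine `items` with the binary operation `op` in a balanced
    binary tree.

    For BDD construction, `op` would be conjunction over clause BDDs;
    combining similar-sized operands often avoids the large
    intermediate diagrams a left-to-right fold can produce.
    """
    assert items, "need at least one item"
    layer = list(items)
    while len(layer) > 1:
        nxt = []
        # Pair up adjacent elements; an odd element is carried over.
        for i in range(0, len(layer) - 1, 2):
            nxt.append(op(layer[i], layer[i + 1]))
        if len(layer) % 2 == 1:
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]

# Strings stand in for clause BDDs, `&` for BDD conjunction:
print(balanced_apply(["c1", "c2", "c3", "c4", "c5"],
                     lambda a, b: f"({a}&{b})"))
# -> (((c1&c2)&(c3&c4))&c5)
```

Note how the result nests as a balanced tree, in contrast to the left-deep `((((c1&c2)&c3)&c4)&c5)` a plain fold would give.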
With respect to: Results
There are a lot of different variables one can tweak with respect to (1) the machine of choice, (2) the common settings used, and (3) the settings of each individual BDD package.
There exist the following comparative studies of BDD packages. Most of them are (unsurprisingly) from the '90s, so a more up-to-date one would be beneficial to the community.
Again, note that there are quite a lot of choices to be made when trying to make a "fair" comparison.
Things to work out
I'd have to think a bit about how I/we could do something like this (and how to keep it somewhat up to date). It very quickly turns into me (1) conducting a survey and/or (2) creating a BDD competition. I'm not entirely sure I'm keen on taking on either right now.
These are just a few of the things one would have to make decisions on.
For this to be of any use, one would then also need to include more BDD packages (see also #12).
I think the best approach would be to provide all the relevant data and let the user decide. And since BDDs might be used in similar ways to SAT solvers, we can take inspiration from the SAT competition.
In conclusion, 3 tracks:
Edit (Steffan): Tasks