Adds all of the original IPC problem files to the repo, along with scripts that randomly generate problem files with added irrelevance and then run experiments on them. The experiments ignore generated problems that are unsolvable, and for each problem record node expansions and planning time, as well as translation and scoping times, over some fixed number of runs. Finally, they report these statistics aggregated across all problems. Note that this PR is still a work in progress (but should be fairly close to done!)
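The reporting step described above (per-problem averages over the runs, with unsolvable generated problems dropped) amounts to something like the sketch below. The record layout, metric names, and `summarize` helper are illustrative assumptions, not the actual scripts' API:

```python
from statistics import mean

# One record per (problem, run). None marks a run whose generated
# problem turned out to be unsolvable and is therefore ignored.
runs = {
    "blocks-01": [
        {"expansions": 120, "plan_s": 0.8, "translate_s": 0.1, "scope_s": 0.2},
        {"expansions": 118, "plan_s": 0.7, "translate_s": 0.1, "scope_s": 0.2},
    ],
    "blocks-02": [
        None,  # unsolvable generated variant: skipped
        {"expansions": 300, "plan_s": 2.1, "translate_s": 0.2, "scope_s": 0.3},
    ],
}

def summarize(runs):
    """Mean of each metric per problem, ignoring unsolvable runs."""
    summary = {}
    for problem, results in runs.items():
        solved = [r for r in results if r is not None]
        if not solved:
            continue  # every generated variant was unsolvable
        summary[problem] = {k: mean(r[k] for r in solved) for k in solved[0]}
    return summary

report = summarize(runs)
```

Aggregating across all problems would then just be another pass of the same shape over `report`.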
Currently, the workflows are as follows:
1. Pick a particular IPC problem and an amount of conditional irrelevance to generate, then generate a set of problem files and save them in a particular folder.
2. Run the scoper + FD on all of the problem files in this folder.
We will probably want to run step (1) manually a number of times to find a good set of randomly generated problem files with enough irrelevance to showcase a difference in planning times.
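The per-folder experiment loop in step (2), with unsolvable problems filtered out, can be sketched as follows. Here `run_planner` is a hypothetical hook standing in for invoking the scoper + Fast Downward on one file and parsing its output; the real entry points live in this PR's scripts:

```python
from pathlib import Path
from typing import Callable, Optional

def run_experiments(folder: Path,
                    run_planner: Callable[[Path], Optional[dict]],
                    n_runs: int = 3) -> dict:
    """Run each problem file in `folder` n_runs times.

    `run_planner` returns per-run metrics (expansions, planning /
    translation / scoping times) or None if the generated problem is
    unsolvable; problems with any unsolvable run are ignored entirely.
    """
    results = {}
    for pddl in sorted(folder.glob("*.pddl")):
        runs = [run_planner(pddl) for _ in range(n_runs)]
        if any(r is None for r in runs):
            continue  # generated problem is unsolvable: skip it
        results[pddl.name] = runs
    return results
```

The timing/statistics reporting then only needs the returned dict, which keeps the planner invocation swappable (e.g. FD with vs. without scoping).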
Example commands:
First, generate a bunch of new randomly-generated files:
Next, run experiments on these: