LambdaConglomerate / x9115lam


Spread #23

Closed meneal closed 8 years ago

meneal commented 8 years ago

Spread is working well now, but it has created an annoying situation: we can't batch jobs together, i.e., run a large number of optimizers/models and then run spread once at the end.

The reason is that the spread implementation we're using only handles one txt file at a time. The way I'm working around that currently is to clear spread's input before each run by moving the old frontier files into the old_obtained directory and then putting the new output frontier into Obtained_PF.
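A minimal sketch of that rotation, assuming a POSIX shell. Only the two directory names (Obtained_PF, old_obtained) come from this issue; the function name and its argument convention are hypothetical:

```shell
# Hypothetical helper: archive last run's frontier files, then stage
# the newly produced frontier so the one-file-at-a-time spread
# implementation sees exactly one input.
rotate_frontier() {
  new_frontier=$1                 # path to the new frontier file (assumed interface)
  mkdir -p Obtained_PF old_obtained
  # Archive whatever spread consumed on the previous run.
  for f in Obtained_PF/*.txt; do
    [ -e "$f" ] || continue       # skip the literal glob when the dir is empty
    mv "$f" old_obtained/
  done
  # Stage the new frontier for the next spread run.
  cp "$new_frontier" Obtained_PF/
}
```

This keeps the archive step and the staging step in one place instead of doing both moves by hand before each run.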

Rather than run spread every time we run the optimizer, I may add the same sort of flag that we have for hypervolume, and then create a batch runner script that puts the files into Obtained_PF one at a time and processes them. Alternatively, we could hack the spread implementation to deal with multiple files.
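The batch-runner idea could look something like the sketch below. The one-file-at-a-time constraint and the directory names are from this issue; `batch_spread`, the `queue` directory, and `run_spread` (a stand-in for however spread is actually invoked here) are assumptions:

```shell
# Hypothetical batch runner: stage frontier files into Obtained_PF one
# at a time, run spread on each, then archive the processed file.
batch_spread() {
  queue_dir=$1                    # directory holding frontier .txt files to process
  for f in "$queue_dir"/*.txt; do
    [ -e "$f" ] || continue
    mkdir -p Obtained_PF old_obtained
    cp "$f" Obtained_PF/          # spread only reads one file, so stage exactly one
    run_spread                    # placeholder for the real spread invocation
    mv "Obtained_PF/$(basename "$f")" old_obtained/
  done
}
```

Each iteration leaves Obtained_PF holding a single file, which is the invariant the current spread implementation depends on.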

Either way, I think this is holding things back a bit. On top of that, we aren't maintaining the spread data in any sensible way yet either, so we need to get that set up.

meneal commented 8 years ago

As a stopgap, just to get this rolling, I've added a flag for spread. Without flags, no metrics run at the end of the script. Running the following will run spread:

```
sh run.sh -s
```

Keep in mind that, at least for now, you'll need to run only one optimizer and one model at a time to get spread metrics.

You can run both spread and hypervolume by running the following:

```
sh run.sh -x
```

Running with no options will just run the script, and no metrics will be run. The next thing to do is to set up a batch runner for spread, and then a separate shell or Python script to run the metrics and save their output somewhere.
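For the flag handling plus saving metric output, one possible shape is below. The `-s` and `-x` flags are from this issue; the function names, the `metrics/log.csv` location, and the log format are all assumptions, and the real spread/hypervolume invocations are omitted:

```shell
# Sketch of run.sh-style flag parsing with getopts.
# -s runs spread; -x runs both spread and hypervolume; no flags, no metrics.
parse_metric_flags() {
  OPTIND=1                        # reset so the function can be called repeatedly
  RUN_SPREAD=0
  RUN_HYPERVOLUME=0
  while getopts "sx" opt "$@"; do
    case $opt in
      s) RUN_SPREAD=1 ;;                    # sh run.sh -s
      x) RUN_SPREAD=1; RUN_HYPERVOLUME=1 ;; # sh run.sh -x
      *) return 1 ;;
    esac
  done
}

# After the optimizer finishes, append whichever metrics were requested
# to a log so the numbers stop disappearing between runs.
save_metrics() {
  mkdir -p metrics
  [ "$RUN_SPREAD" -eq 1 ] && echo "spread,$(date +%Y-%m-%d)" >> metrics/log.csv
  [ "$RUN_HYPERVOLUME" -eq 1 ] && echo "hypervolume,$(date +%Y-%m-%d)" >> metrics/log.csv
  return 0
}
```

Appending to a single CSV is just one option; the point is that the metric values get written somewhere durable instead of scrolling past in the terminal.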