Ulfgard opened this issue 7 years ago
Hi @Ulfgard,
Thanks for reporting your needs. Is the `--expensive` option what you want? We turned off the automated detection of the expensive setting by default, but you can always invoke the postprocessing with the above switch (and I don't think it has ever been on for the biobjective test suite). However, the `--expensive` switch also changes the displayed targets from uniform on the log scale to runlength-based targets that depend on the reference algorithm.
That's probably not exactly what you want, but we don't yet have an option to fully ignore input values above a certain budget; see also #18.
Just to understand this: could the `--expensive` option make things faster for small budgets?
`--expensive` is not meant as a speed optimisation option. It displays different targets, which tend to be more interesting when only small budgets have been tested.
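To illustrate the difference between the two target families (a rough sketch only; the numbers and the fake reference trace below are made up and are not coco's actual settings):

```python
import numpy as np

# Targets "uniform on the log scale": log-spaced precision values,
# independent of any reference algorithm (values here are illustrative).
log_uniform_targets = 10.0 ** np.linspace(2, -8, 51)

# Runlength-based targets: the best precision the reference algorithm
# reached within given budgets, so they depend on its data (fake trace).
reference_trace = {10: 1e1, 100: 1e-1, 1000: 1e-3}  # budget -> best precision
budgets_of_interest = [10, 100]                      # small budgets only
runlength_based_targets = [reference_trace[b] for b in budgets_of_interest]
```

With small budgets, the runlength-based targets stay in the range an algorithm can plausibly reach, whereas most of the log-uniform targets would simply never be hit.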
Are you certain that producing the best 2016 graph is what takes the majority of the time? In that case we should at least have a work-around, e.g. not generating and displaying this graph at all.
I would assume that this is the reason, because the graph seems to be regenerated every time the plots are generated (or am I wrong? I think the graph looks slightly different every time). So for small budgets this graph should take roughly 2-3 orders of magnitude longer to generate. I know that in the previous version, before this graph was added, small budgets were quite fast - I often used short runs during tuning. Is there a quick way to deactivate this graph?
A fix would be to only generate the graph up to the largest iteration count in the data. The graph itself is still interesting for comparison.
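The proposed fix could be sketched roughly as follows (names are hypothetical, not cocopp's API; the inputs are assumed to be plain lists of evaluation counts at which targets were hit):

```python
def clip_reference(reference_runtimes, own_runtimes):
    """Keep only reference data points within the largest budget
    that the user's own algorithm was actually run for."""
    max_budget = max(own_runtimes)  # largest evaluation count in own data
    return [t for t in reference_runtimes if t <= max_budget]

# With a short own run, most of the best-2016 data would be skipped:
clipped = clip_reference([10, 100, 10_000, 1_000_000], [5, 150])
```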
Hi @Ulfgard,
Yes, there is a way to deactivate the graph, but at the cost of getting absolute rather than relative values in the tables.
You can achieve this by not specifying any `reference_algorithm_filename` in `code-postprocessing/cocopp/testbedsettings.py` within the corresponding `Testbed` class (line 293 in my case, for the `bbob-biobj` test suite):

`reference_algorithm_filename = ''`

For this to have an effect, you must of course rebuild the `cocopp` module via `python do.py install-postprocessing`. The effect was a time saving of about 20% in my case (only very roughly measured).
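For reference, the edit amounts to something like the following (a sketch only; the base class and the original filename below are placeholders, while `GECCOBiObjBBOBTestbed` is the class cocopp uses for the bbob-biobj suite):

```python
class Testbed:  # stand-in for cocopp's base class, for illustration only
    reference_algorithm_filename = 'some-reference-data.tar.gz'  # placeholder

class GECCOBiObjBBOBTestbed(Testbed):
    # An empty filename means no reference algorithm is loaded, so the
    # "best 2016" data is neither processed nor displayed.
    reference_algorithm_filename = ''
```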
Just to clarify before we'll probably close this issue: there is now also a way to do the above without re-installing the python module for the postprocessing. Simply do the following in an (i)python console or in jupyter:
```python
In [1]: import cocopp
In [2]: cocopp.testbedsettings.GECCOBiObjBBOBTestbed.settings['reference_algorithm_filename'] = None
In [3]: cocopp.config.config()
In [4]: cocopp.main('SMS-EMOA! NSGA-II!')  # as an example
```

to turn off the display of the reference algorithm.
The issue in short: when you run an algorithm for a small number of function evaluations, e.g. 100D, the postprocessing will still generate the ECDF graphs for the full best 2016 benchmark. This takes several minutes, while in the old coco release it was very fast. There does not seem to be an option to prevent this from happening.