Closed by nikohansen 1 month ago
Places to review in the code:

- `pproc.DataSet.budget_effective_estimates` is a by-instance dictionary of `sum(maxevals) / max(1, #successes)`.
- `dictMaxEvals2` is passed as `maxeval2` to `pprldmany.plotdata`, which is displayed as `max_evals_single_marker_format == '+'` and created like:

```python
maxmed = percentile(runlengthunsucc)
if len(runlengthsucc):
    maxmed = max((maxmed, percentile(runlengthsucc)))
dictMaxEvals2[keyValue].append(maxmed)
```
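The excerpt above omits the percentile arguments; as I read it, the computation can be sketched standalone like this (assuming the 90th percentile per the caption below, and `numpy.percentile`):

```python
import numpy as np

def max_median(runlengthsucc, runlengthunsucc, percentile=90):
    """Sketch of the maxmed computation above: the larger of the two
    90th percentiles of unsuccessful and (if any) successful runlengths."""
    maxmed = np.percentile(runlengthunsucc, percentile)
    if len(runlengthsucc):
        maxmed = max(maxmed, np.percentile(runlengthsucc, percentile))
    return maxmed

# toy data: the unsuccessful runs dominate the percentile here
print(max_median([100, 120], [500, 900, 1000]))
```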
The bug: the `x` cross value is not divided by dimension while it should be.
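A minimal sketch of the fix I have in mind (the function name is mine, not cocopp's): the x-axis of the plots shows #evaluations / dimension, so the value behind the `x` cross must be scaled by the dimension as well.

```python
import numpy as np

def x_cross_position(ert1_values, dimension, percentile=90):
    """Hypothetical sketch: 90%tile of the per-instance ERT_1 values,
    rescaled to the #evals/dimension axis -- the missing division."""
    return np.percentile(ert1_values, percentile) / dimension

print(x_cross_position([100.0, 200.0, 300.0], 10))
```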
See #4 for the updated caption.
This is a tentative caption for the new crosses which are in better alignment with the new experimental restarts setup.
Caption: Two big (thin) crosses appear on the graphs when at least one trial was unsuccessful: they depict the median over the considered functions of +) the larger of the two 90%tiles of runtimes from all successful/unsuccessful single trials of all instances and of x) the 90%tile of the ERT_1 = sum(runtimes) / max(1, #successes) for each instance. Usually, +) < x) and the latter cross might be missing. A small dot indicates the last step of the step function. Roughly speaking, the runtimes between +) and x) are generated by bootstrapping of data. Data should be interpreted with great care (or not at all) beyond x) and also beyond +) if the termination related to +) was not induced by the algorithm but imposed by the user.
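To make the caption concrete, here is a toy computation of the two cross values for a single function (made-up data; per-instance grouping and the 90%tile conventions as described above):

```python
import numpy as np

# made-up runtimes (#evals) of single trials, grouped by instance;
# the boolean marks success (the budget is spent either way)
trials = {
    1: [(1200, True), (3000, False)],   # instance 1: one success, one failure
    2: [(2500, False), (2500, False)],  # instance 2: all failures
}

succ = [t for runs in trials.values() for t, ok in runs if ok]
unsucc = [t for runs in trials.values() for t, ok in runs if not ok]

# +) the larger of the two 90%tiles over successful / unsuccessful trials
plus = np.percentile(unsucc, 90)
if succ:
    plus = max(plus, np.percentile(succ, 90))

# x) the 90%tile of per-instance ERT_1 = sum(runtimes) / max(1, #successes)
ert1 = [sum(t for t, _ in runs) / max(1, sum(ok for _, ok in runs))
        for runs in trials.values()]
x = np.percentile(ert1, 90)

print(plus, x)  # plus < x here, matching "Usually, +) < x)"
```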
Question: if all instances were run once with the same budget and are unsuccessful, both crosses should be at the same place? Yet, it seems they are not!? EDIT: fixed.
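The expectation in the question can be checked directly: with one unsuccessful trial per instance and a common budget, both statistics reduce to that budget (toy check, same conventions as above):

```python
import numpy as np

B = 10000.0  # common budget, all trials unsuccessful, one trial per instance
runlengthunsucc = [B, B, B]
ert1 = [B / max(1, 0) for _ in range(3)]  # sum(runtimes)/max(1, #successes) == B

plus = np.percentile(runlengthunsucc, 90)  # +) cross
x = np.percentile(ert1, 90)                # x) cross
print(plus == x == B)  # both crosses coincide at the budget
```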
Feature request: ~~The `x` cross is only relevant when `instances_are_uniform`.~~ Hence we should probably omit this cross otherwise (like for `bbob-biobj`)? EDIT: the cross considers only within-instance repetitions, that is, it does not assume uniformity of different instances. However, in the multiobjective case, instance repetitions also seem not meaningful for generating performance data via bootstrapping; they should rather be considered as experiment repetitions.