Open · brockho opened this issue 7 years ago
By default, adaptive with a very coarse grid is probably a good compromise, i.e., allowing only very few values. For example, 1e7 (the current default) if maxevals > 1e3, and 1e5 otherwise, seems reasonable to me and might already be sufficient.
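A minimal sketch of the coarse two-value rule described above (the function name and the exact thresholds as a code interface are my own illustration, not an existing postprocessing API):

```python
def ecdf_xmax(maxevals):
    """Return the x-axis maximum for the ECDF plot.

    Coarse adaptive rule: only two possible values, so graphs for
    different algorithms remain easy to compare side by side.
    """
    # 1e7 is the current fixed default; fall back to 1e5 for
    # short budgets (maxevals <= 1e3).
    return 1e7 if maxevals > 1e3 else 1e5
```

With only two admissible values, any two graphs either share the same x-range or differ by a single, easily recognized factor of 100.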
Rationale: simulated restarts with 15 trials may increase the expected runtime by a factor of 15 (in the case of a single success). Beyond a factor of 100, the runtime distribution graph should always be pretty flat anyway.
Do we want the postprocessing to display a fixed maximum value (as now) or an adaptive one on the x-axis of the ECDF plots? And if the answer is adaptive, how should this be implemented so that graphs for different algorithms remain relatively easy to compare?