My numbers are not fully comparable, because one machine has only Matlab installed (Windows XP on an old 2009 Dell notebook) and the other only Octave (Windows 7 on a 2015 HP notebook). For both tests, the C code was compiled for 32 bit.
Octave runs in 123s-133s, Matlab in 48s-53s (range over 3 runs each).
If you add data for, say, C, we have a common point of comparison.
Surprisingly, the C experiment also takes around 55 seconds (on the new Windows 7 notebook) and around 60 seconds (on the old Windows XP notebook). Almost the same numbers hold for Java and Python on the Windows 7 notebook.
Can this issue be closed? Or do we want to decrease the number of function evaluations in the example experiments further?
If 100+ seconds is actually a standard scenario, I am in favor of decreasing the number of evaluations.
The new rewrite of the Matlab wrapper (including the new example experiment with restarts) made exampleexperiment.m a bit quicker under Octave. The biobjective experiment with a budget of 2*DIM now takes about 109 seconds; reducing the budget to DIM/2 brings this down to about 73 seconds in Octave. The same experiment in C still takes around 55 seconds with a 2*DIM budget and about 50 seconds with a DIM/2 budget.
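For reference, here is a minimal sketch of how such a budget multiplier typically enters the experiment loop, written with the Python `cocoex` module (the Octave script follows the same overall pattern). Random search stands in for the real solver, and the result folder name is just an illustration:

```python
import numpy as np
import cocoex  # COCO experimentation module

budget_multiplier = 2  # the 2*DIM budget discussed above; 0.5 would give DIM/2

suite = cocoex.Suite("bbob-biobj", "", "")
observer = cocoex.Observer("bbob-biobj", "result_folder: timing_check")

for problem in suite:
    problem.observe_with(observer)  # attach logging, as in the example experiments
    budget = int(budget_multiplier * problem.dimension)
    while problem.evaluations < budget:
        # random search as a placeholder solver
        x = problem.lower_bounds + (
            problem.upper_bounds - problem.lower_bounds
        ) * np.random.rand(problem.dimension)
        problem(x)
    problem.free()
```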
Strange, what is going on here?
Just to give a rough idea, on a 2014 MacBook Pro:
The startup time in Python is to a large part a consequence of touching each problem once to collect some information before the benchmarking starts. Feel free to provide some data for the missing entry.
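A minimal sketch of such a pre-pass, assuming the `cocoex` module; the attributes read here are illustrative, not necessarily exactly what the wrapper collects:

```python
import time
import cocoex

suite = cocoex.Suite("bbob-biobj", "", "")

t0 = time.time()
for problem in suite:  # touch each problem once before the benchmarking starts
    _ = (problem.dimension, problem.number_of_objectives)  # collect some information
print("pre-pass over %d problems took %.1fs" % (len(suite), time.time() - t0))
```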
EDIT: updated with the numbers below.