olafmersmann opened 2 weeks ago
Maybe this information is helpful: the only functions for which the regression test never fails are 109, 126
The split among different noise models is almost even ...
It is unclear how we could have thought that the new and the old code are doing the same thing:
```python
x = [0, 0]

print('new code base (development branch):')
import cocoex
mysuite = cocoex.Suite('bbob-noisy', '', '')
ffn = mysuite.get_problem_by_function_dimension_instance(101, 2, 1)
for _ in range(3):
    print(ffn(x))

print('old code base (v15.03):')
import bbobbenchmarks as bn
import numpy as np
ffo = bn.F101(1)
for _ in range(3):
    print(ffo([x])[0])
```
gives the following on my machine (Windows 10 with Python 3.11.5 and the latest COCO version from the development branch):
```
new code base (development branch):
80.88585114048874
80.86336377045433
80.8817960944071
old code base (v15.03):
80.88817435964651
80.90779837228773
80.88195820355547
```
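The outputs above differ from call to call even within a single code base, which already hints that the noise is re-drawn on every evaluation. A minimal self-contained sketch of that behavior, assuming the BBOB-style Gaussian noise model `f_noisy = f * exp(beta * N(0, 1))`; the toy objective and `beta = 0.01` here are illustrative, not COCO's actual implementation:

```python
import numpy as np

def sphere(x):
    """Noiseless toy objective (illustrative stand-in for f101)."""
    return float(np.sum(np.asarray(x) ** 2)) + 80.0

def gaussian_noise(f, beta=0.01, rng=np.random):
    """BBOB-style Gaussian noise model: f * exp(beta * N(0, 1))."""
    return f * np.exp(beta * rng.randn())

x = [0, 0]
values = [gaussian_noise(sphere(x)) for _ in range(3)]
print(values)
# repeated evaluations of the same x yield different noisy values,
# because a fresh Gaussian draw is made on every call
assert len(set(values)) > 1
```

This reproduces the qualitative pattern in the printed values: three close but unequal numbers around the noiseless baseline.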
Note: I had to replace `xrange` with `range` and `iteritems` with `items` in the `bbobbenchmarks.py` file in order to make the old code work with Python 3.
My guess for the difference: the two code bases use different random number generators (to be checked).
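To illustrate why that guess is plausible (a generic demonstration, not COCO's actual generators): even when seeded identically, two different random number generators produce different Gaussian streams, so the noisy values cannot match across code bases unless the generator and its state handling are identical.

```python
import random
import numpy as np

seed = 42
# NumPy's legacy RandomState generator (Mersenne Twister)
np_draws = np.random.RandomState(seed).randn(3)
# Python's built-in random module (also Mersenne Twister,
# but with a different Gaussian transformation)
py_rng = random.Random(seed)
py_draws = [py_rng.gauss(0.0, 1.0) for _ in range(3)]

print(np_draws)
print(py_draws)
# same seed, different generators -> different Gaussian streams
assert not np.allclose(np_draws, py_draws)
```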
Together with @nikohansen and @ttusar, we found that the noise in COCO is actually non-deterministic :-) Hence, we need to exclude `bbob-noisy` from the "normal" regression test (use `functionobject._evalfull(x)` to get the noisy and the noiseless value of `x`).
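One way a regression test could still cover the noisy suite deterministically is to compare only the noiseless part of the evaluation. A hedged sketch with a mock problem class standing in for a real benchmark function; the class, values, and assumed return order `(noisy, noiseless)` are illustrative and only the `_evalfull` call pattern comes from the old code base:

```python
import numpy as np

class MockNoisyProblem:
    """Illustrative stand-in for an old-code-base noisy BBOB function object."""

    def __init__(self, ftrue=80.0, beta=0.01, seed=None):
        self.ftrue = ftrue          # noiseless value (toy constant)
        self.beta = beta            # noise strength (toy value)
        self.rng = np.random.RandomState(seed)

    def _evalfull(self, x):
        # assumed return order: (noisy value, noiseless value)
        fnoisy = self.ftrue * np.exp(self.beta * self.rng.randn())
        return fnoisy, self.ftrue

problem = MockNoisyProblem()
x = [0, 0]
noisy_1, noiseless_1 = problem._evalfull(x)
noisy_2, noiseless_2 = problem._evalfull(x)
# the noisy values differ between calls ...
assert noisy_1 != noisy_2
# ... but the noiseless values are deterministic and can be regression-tested
assert noiseless_1 == noiseless_2
```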