Closed: nhansendev closed this issue 7 months ago.
After thinking about it more, this could just be the search "exploding" after the initial evaluation, producing only worse candidate values (so f_best stays flat) until it eventually calms back down.
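To illustrate why an "explosion" shows up as a flat stretch: f_best is typically tracked as a running minimum, so a run of worse candidate values leaves it unchanged rather than making it worse. A toy sketch with made-up per-step values:

```python
# f_best as a running minimum stays flat while per-step values worsen, then recover
values = [10.0, 25.0, 40.0, 30.0, 12.0, 8.0, 5.0]  # illustrative, not from the report
f_best = float("inf")
history = []
for v in values:
    f_best = min(f_best, v)
    history.append(f_best)
print(history)  # [10.0, 10.0, 10.0, 10.0, 10.0, 8.0, 5.0]
```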
Example plots for dim = 1000 and dim = 100 (images not preserved).
After adjusting lamb to 12 instead of 6, the solution converges as expected instead of "stalling", so I guess there isn't really an issue as long as the parameters are adjusted properly.
@Obliman Thank you for the interesting report. I was not aware of this phenomenon, but I believe, as you already mentioned above, the reason is that lamb is too small. Since lamb is usually set to about 4 + int(3 * log(d)), which is a minimum recommended value and can be set larger, lamb=6 seems to be too small for d >= 100. Fig. 4 in the README (and in the paper) investigates performance sensitivity w.r.t. lamb, so please see it if you are interested.
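For concreteness, evaluating the quoted heuristic in Python gives the minimum recommended population sizes for the two dimensions discussed in this thread:

```python
import math

def recommended_lamb(d):
    # minimum recommended population size quoted above: 4 + int(3 * log(d))
    return 4 + int(3 * math.log(d))

print(recommended_lamb(100))   # 17
print(recommended_lamb(1000))  # 24
```

Both values are well above the lamb=6 used in the original report.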
While testing the simple example code that optimizes
np.sum(x**2)
I noticed that the optimization process seems to stop for a while when the dimension is increased to a large value (e.g. ~1000). Code to reproduce:
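The original snippet is not preserved here. As a stand-in, the sketch below uses a deliberately simplified (1+λ) evolution strategy in plain NumPy (not the library's CMA-ES; all parameter values are illustrative) to show the kind of f_best-tracking loop the report describes:

```python
import numpy as np

def sphere(x):
    # the objective from the report: np.sum(x**2)
    return np.sum(x**2)

rng = np.random.default_rng(0)
dim, lamb, sigma = 100, 12, 0.2   # illustrative values, not the original settings
mean = rng.standard_normal(dim)
f_best = f_init = sphere(mean)

for step in range(200):
    # sample lamb candidates around the current mean
    candidates = mean + sigma * rng.standard_normal((lamb, dim))
    values = np.array([sphere(c) for c in candidates])
    i = int(np.argmin(values))
    if values[i] < f_best:        # greedy (1+lambda) acceptance
        f_best = float(values[i])
        mean = candidates[i]
    if step % 50 == 0:
        print(step, f_best)
```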
Example output (no progress for ~11 iterations of 100 optimizer steps):

```
353.90549790931476
353.90549790931476
353.90549790931476
353.90549790931476
353.90549790931476
353.90549790931476
353.90549790931476
353.90549790931476
353.90549790931476
353.90549790931476
339.25010722424685
286.1063896053389
247.91645354571136
213.37571948612083
183.2348610899695
154.01953822981847
134.00792653279
107.80148620615931
87.5663151895065
74.59439720712179
61.14215951634783
...
```
Note that this does not occur when the dim is changed to 100 instead (with optimizer steps of 10).
I just wanted to check if this behavior is at all expected, or if something has gone wrong.