fkarger opened this issue 5 years ago
This implementation doesn't include all of the original code's linear algebra tricks which help it scale, so the per-iteration cost is roughly cubic in the input dimension (with the default `npt = 2n+1`; worse if `objfun_has_noise=True`). Improving scalability is an ongoing project for me, but in the meantime you could try reducing `npt`.
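As a rough illustration (not Py-BOBYQA's actual internals), the dense linear-algebra work per iteration grows roughly cubically with the number of interpolation points, so cutting `npt` from the default `2n+1` toward the minimum `n+2` shrinks it substantially. The sketch below just times a dense solve of each size with NumPy:

```python
import time
import numpy as np

def time_dense_solve(m, trials=3):
    """Time solving one dense m x m linear system (O(m^3) work),
    similar in spirit to the interpolation systems a model-based
    solver factorizes each iteration."""
    rng = np.random.default_rng(0)
    A = rng.standard_normal((m, m)) + m * np.eye(m)  # well-conditioned
    b = rng.standard_normal(m)
    best = float("inf")
    for _ in range(trials):
        t0 = time.perf_counter()
        x = np.linalg.solve(A, b)
        best = min(best, time.perf_counter() - t0)
    assert np.allclose(A @ x, b)  # sanity check on the solve
    return best

n = 50
for npt in (2 * n + 1, n + 2):  # default vs. minimal choice of npt
    print(f"npt={npt}: {time_dense_solve(npt):.2e} s per solve")
```

In Py-BOBYQA itself, `npt` is a keyword argument of `pybobyqa.solve`, e.g. `pybobyqa.solve(objfun, x0, npt=n+2)`.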
Also, on my machine, I sometimes see an issue where NumPy's LAPACK calls unnecessarily use too many cores, making it much slower than it should be. Setting the environment variables

```
export OPENBLAS_NUM_THREADS=1
export NUMEXPR_NUM_THREADS=1
```

helps a lot in this case.
Thank you very much for the hints! I am really looking forward to the better-scaling version :)
I tested Py-BOBYQA, e.g. with the Rastrigin function, for different input dimensions. The numerical results were very good, but I got an increasingly strong speed penalty; for 50 dimensions it was already very noticeable. I also tested the BOBYQA implementation in apache.commons.math (Java), which does not seem to have this issue (but has other issues). A profiler could probably pinpoint the reason easily.
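For reference, a standard NumPy version of the Rastrigin function used in this test (the exact objective and dimensions from the original benchmark are assumptions here):

```python
import numpy as np

def rastrigin(x):
    """Rastrigin test function: f(x) = 10n + sum(x_i^2 - 10 cos(2 pi x_i)).
    Global minimum f(0) = 0, with many surrounding local minima."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return 10.0 * n + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

# Global minimum at the origin, e.g. in 50 dimensions:
print(rastrigin(np.zeros(50)))  # → 0.0
```

With Py-BOBYQA this could then be minimized via something like `pybobyqa.solve(rastrigin, x0)` for various dimensions of `x0` to reproduce the timing behaviour described above.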