PKU-DAIR / open-box

Generalized and Efficient Blackbox Optimization System
https://open-box.readthedocs.io

Optimizer is running slow #73

Open Dalton2333 opened 7 months ago

Dalton2333 commented 7 months ago

I was running an optimization with the default Bayesian optimization advisor, and it takes around 400 s to get a suggestion at iteration 400. Is this normal, and how can I make it run faster?

jhj0411jhj commented 6 months ago

Hi @Dalton2333, there are several reasons why the optimization may slow down:

  1. The typical reason is using a GP as the surrogate model. Once there are hundreds of observations, a GP is significantly slower than the PRF (probabilistic random forest) model. This is normal, and you can switch to PRF in this case. However, please also check the following reason, which can make the GP extremely slow.
  2. If the machine has many CPU cores (e.g. 100), the GP can be extremely slow because it tries to occupy all cores for its computation. If the process runs in a Docker container with a limited CPU quota, or multiple experiments run simultaneously, each GP still spawns a thread per core, but the occupancy rate on each core is extremely low. To fix this, add the following code at the beginning of your startup script (at least before `import numpy`):
    import os
    NUM_THREADS = "1"
    os.environ["OMP_NUM_THREADS"] = NUM_THREADS         # export OMP_NUM_THREADS=1
    os.environ["OPENBLAS_NUM_THREADS"] = NUM_THREADS    # export OPENBLAS_NUM_THREADS=1
    os.environ["MKL_NUM_THREADS"] = NUM_THREADS         # export MKL_NUM_THREADS=1
    os.environ["VECLIB_MAXIMUM_THREADS"] = NUM_THREADS  # export VECLIB_MAXIMUM_THREADS=1
    os.environ["NUMEXPR_NUM_THREADS"] = NUM_THREADS     # export NUMEXPR_NUM_THREADS=1

    Alternatively, set these environment variables in the startup command.

  3. The number of objectives or constraints is large. Each objective and constraint is modeled by its own surrogate, so suggestion time grows with their count.
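For the startup-command variant in point 2, the caps can be applied per run from the shell. In this sketch, the `python -c` probe merely confirms the variable is visible to the child process; in practice you would replace it with your actual startup command:

```shell
# Cap all common BLAS/threading backends for this run only.
# Replace the `python -c ...` probe with your real startup command,
# e.g. `python my_optimization.py` (placeholder name).
OMP_NUM_THREADS=1 OPENBLAS_NUM_THREADS=1 MKL_NUM_THREADS=1 \
VECLIB_MAXIMUM_THREADS=1 NUMEXPR_NUM_THREADS=1 \
python -c 'import os; print(os.environ["OMP_NUM_THREADS"])'
```

Because the variables are set in the command itself, they apply only to that run and do not pollute the shell session.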

Hope this can help you.